
Redefining EMR Usability



Recent studies suggest that EMR usability will be judged based primarily on error prevention.  

When I left Manhattan for the Society of Academic Emergency Medicine (SAEM) annual meeting in Boston, I was ready for a change of scenery. We had gone live with a new information system in our emergency department just a month before. While the vendors thought it went smoothly enough, and the financial hit seemed (for the most part) mitigated, I was still fielding a lot of requests from my physician colleagues. As I examined that physician feedback in light of some interesting informatics research presented at SAEM, I learned a few important things about EDIS usability.


In the first few days and weeks after our go-live, the physician requests focused on patient safety and the fundamentals of navigating the electronic record. Then, in the middle of the month, the emphasis shifted toward documenting encounters with speed and completeness. Finally, in those last days before Boston, the requests seemed to be getting a little nit-picky. Missing synonyms for HPI templates, for instance (typing in “dizzy” brings up the proper template, but not “dizziness” or “vertigo”). Some of the procedure templates were too generic (we had a half-dozen click boxes on patient consent, which seemed appropriate for central line placement but less so for cerumen disimpaction). Ordering an ECG outside an order set required users to specify portable vs. bedside, which seemed redundant.

All the requests had merit, of course. Folks were just getting used to the basics and settling in for the long haul, and minor EMR annoyances were becoming more annoying with time and repetition. I was working with our vendor and department leadership to smooth out the wrinkles and make our users happy with the new system. But I could feel the focus shifting to usability, and I knew that this would be a long, twilight struggle.

As I understood it, EMR usability was about intuitiveness, consistency and efficiency: minimizing the number of clicks to get something done, and minimizing the disruptions to workflow when something unexpected occurs. I thought a usable system was something you just “know when you see it.” Prior to federal incentives, the anemic adoption of electronic medical records (EMR) was blamed on poor usability. And the lack of formal usability criteria in the recent Meaningful Use incentives has been lamented – since we’re focused on so many features but not the user experience, it seems natural that the user experience will suffer.


I’ve actually been involved in EMR usability studies (for a different company than our go-live vendor). We used software to time users on tasks, count clicks and mouse movements, and record users’ facial expressions and utterances. It’s impressive how much clicking can be required to document a laceration repair, or how a user’s face can contort when writing discharge paperwork.
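If you’re curious what that instrumentation boils down to, here is a minimal sketch (mine, not the actual study software) of turning a per-session event log into the two numbers we leaned on most: time-on-task and click counts. The CSV layout and column names are assumptions for illustration.

# Sketch only: a hypothetical event log with columns task, timestamp, event.
# The real usability software captured far more (mouse paths, screen video, audio).
from collections import defaultdict
import csv
from datetime import datetime

def summarize_session(log_path):
    """Return {task: {"clicks": n, "seconds": s}} from a simple CSV event log."""
    first_seen, last_seen = {}, {}
    clicks = defaultdict(int)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            task = row["task"]
            ts = datetime.fromisoformat(row["timestamp"])
            first_seen.setdefault(task, ts)
            last_seen[task] = ts
            if row["event"] == "click":
                clicks[task] += 1
    return {task: {"clicks": clicks[task],
                   "seconds": (last_seen[task] - first_seen[task]).total_seconds()}
            for task in first_seen}

# summarize_session("laceration_repair_session.csv") might return something like
# {"laceration_repair": {"clicks": 87, "seconds": 412.0}}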

I felt we could still do a good deal to improve the usability of our system after go-live, but at some point, we’d come up against the limitations of the vendor’s software. Some clicks can’t be eliminated, given the way the current system is designed. Some orders can’t be simplified. The components of the banana bag, for instance, have to be ordered individually each time. The ECG still has to be specified as portable or bedside, even if the techs don’t pay attention to that field.

And so it was that I ended up at the SAEM research forum, browsing through posters and presentations, hoping to find some inspiring usability research.


I wasn’t disappointed (see the sidebar on some interesting future tech), but I was surprised by the usability research I did see. I’m referring specifically to two oral abstracts out of Beth Israel in NYC, presented on the final day at SAEM. Both presentations considered the frustrating EMR phenomenon of placing orders on the wrong patient.

The first, presented by Kar-Mun Woo, looked at wrong-patient orders from the perspective of self-reporting (if you’ve ever seen these drop-down list prompts, you know how tempting it is to blow past them – especially if you’re in the midst of correcting a mistake). Woo’s group compared this self-reporting to a proxy for wrong-patient orders, by counting situations where doctors canceled orders on one patient and rapidly placed them on another. If doctors told the system that they were doing this because of a “change in treatment plan,” it was considered, well, a “falsely documented” patient identification error. It turns out most docs were documenting these errors honestly, and a small fraction was responsible for most of the (presumed) false documentation. (Abstract 624)
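For readers who like to see the logic spelled out, here is a rough sketch of how a “retract-and-reorder” proxy like this could be computed from an order log. The ten-minute window, the field names, and the matching rules are my assumptions for illustration, not the study’s actual algorithm.

# Assumption-laden sketch: flag events where the same physician cancels an order
# on one patient and places the same order on a different patient shortly after.
from datetime import timedelta

def retract_and_reorder(events, window=timedelta(minutes=10)):
    """events: dicts with keys physician, patient, order, action ('place'/'cancel'), time."""
    cancels = [e for e in events if e["action"] == "cancel"]
    placements = [e for e in events if e["action"] == "place"]
    flagged = []
    for c in cancels:
        for p in placements:
            if (p["physician"] == c["physician"]
                    and p["order"] == c["order"]
                    and p["patient"] != c["patient"]
                    and timedelta(0) <= p["time"] - c["time"] <= window):
                flagged.append((c, p))
    return flagged

# Each flagged pair is only a *presumed* wrong-patient order; the study's point was
# comparing these against what physicians actually self-reported.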

The second presentation, by Anurag Gupta, looked at self-reported wrong-patient errors and tried to find trends. They were more common among residents (though maybe residents were more honest, or wrote more orders). There was no difference between night shift and day shift. Surprisingly, “risky” drugs like insulin or heparin were more likely to be ordered on the wrong patient than, say, acetaminophen.

Interesting stuff, but it seemed pretty narrowly focused, and far from what I conceived of as usability research. Where were the satisfaction surveys? Time-on-task reports? Click counts? But by focusing on measuring errors, both studies anticipated something that I hadn’t: error measurement and prevention is the standard by which usability will be defined. (Abstract 625)


Recently, the National Institute of Standards and Technology (NIST) held a workshop on EMR usability. NIST, as it turns out, has a decades-long tradition of building consensus and defining standards for government, industry, and consumer groups. Reports from the meeting suggest the standard for EMR usability is going to be the prevention of medical errors. We can expect that CMS and the Office of the National Coordinator for Health IT will be following these deliberations, and that EMRs will need to demonstrate some threshold of usability for certification before hospitals can earn meaningful use dollars for installing them.

Since measurement of errors is going to be integral to determining usability, the Beth Israel group’s efforts will only grow in significance. I only hope the goal of measuring errors expands to include errors resulting from non-intuitive or inconsistent EMR processes, alert fatigue, and other frustrating features of many modern EMRs. There are already features of meaningful use and EMR certification geared toward patient safety. Limiting the definition of usability to error prevention, while still ignoring an often-mediocre user experience . . . would be an error.

TECH FORWARD –
More innovative research presented at the SAEM annual meeting

POV Video Documentation
Ever wanted to walk around the ED with a hands-free camera? Rebecca Nelson and others from Hennepin presented some data on point-of-view documentation. POV video recording is available, supplied by (of all people) the TASER company. Patients were enrolled prior to being approached by the camera-wielding ED doc. Patients were excluded if they had chief complaints concerning their genitalia – kids, non-English speakers and the critically ill were also excluded. Half of patients were excluded, or left, or refused to participate. Of the other half, 4% had equipment failure and 1.7% of docs forgot to turn the camera on for the encounter. All told, 393 encounters were recorded. Of those reviewed (29%, or 505 hours), some missing audio or video was noted, but not much. The authors concluded that POV recording seemed feasible for documenting encounters, and I expect we’ll be hearing more from this group and their fun equipment. (abstract 257)

Infrared physician tracking
Understanding clinical movement patterns and process times is part of lean performance improvement. Shuji Uemura’s group at UMass tracked physician movements with different methods, looking at real-time location system (RTLS) accuracy. Encounters between doctors and patients lasting 15 or more seconds were observed and trailed with various systems. It turns out a 64-lamp infrared (IR) system was more sensitive than a WiFi/IR combo, and much better than WiFi alone. While I hope this research leads to more optimal staffing and resource allocation, there are some Big Brother overtones that make me queasy. (abstract 536)

Image quality for mobile diagnosis
There are many barriers to telemedicine, but also many tools to help make it possible. Peter Dowiatt and others at George Washington University looked at the quality of patient-generated images sent to physicians by mobile phone. This was a prospective trial of a convenience sample of patient images of abscesses, rashes, and lacerations. Doctors who received the emailed pictures were asked if they could determine management based on the image alone. Image quality was rated as very good, though not many images were more than three megapixels. Like the other trials, this seems like the start of a big experiment on off-the-shelf tech for emergency telemedicine. (abstract 251)

iPad efficiency
Since the iPad’s introduction, I’ve been hoping that tablet computers would be shown to impact EM operations. Steve Horng of Beth Israel Deaconess in Boston studied iPad usage at the ED bedside. His group looked at time spent on ED workstations over two months, and also counted session logins. They performed a multivariate linear regression adjusting for doctor training level and shift location. Across 168 shifts, among 13 doctors, iPad use was associated with a mean of 39 fewer minutes in front of an ED workstation, and 5.1 fewer logins per shift. A promising start, but is it enough to get your department to buy one for you? (abstract 256)
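For what it’s worth, the adjusted comparison they describe is the kind of thing you can sketch in a few lines of Python; the column names, filename, and model layout below are made up for illustration, not the authors’ code or data.

# Hypothetical per-shift table: ipad_used (0/1), workstation_minutes, logins,
# training_level, shift_location. Not the study data or the authors' model code.
import pandas as pd
import statsmodels.formula.api as smf

shifts = pd.read_csv("ed_shifts.csv")  # made-up filename

minutes_model = smf.ols(
    "workstation_minutes ~ ipad_used + C(training_level) + C(shift_location)",
    data=shifts).fit()
logins_model = smf.ols(
    "logins ~ ipad_used + C(training_level) + C(shift_location)",
    data=shifts).fit()

# A negative ipad_used coefficient in each model would echo the reported
# ~39 fewer workstation minutes and ~5 fewer logins per shift.
print(minutes_model.params["ipad_used"], logins_model.params["ipad_used"])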

Full abstracts are available from Academic Emergency Medicine’s Special Meeting Supplement issue (Volume 18, May 2011) at http://onlinelibrary.wiley.com/journal/10.1111/(ISSN)1553-2712
Nicholas Genes works at Mount Sinai Medical Center in New York City, and serves on the SAEM social media committee.

 

2 Comments

  1. Your post is consistent with what I observed when I attended the NIST workshop.

    From my post about the meeting:

    [quote]The workshop focussed more on error minimization and patient safety than other possible usability goals such as speed, productivity and user satisfaction, though each of these topics were indeed represented in presentations and discussion. The following slide is from a later presentation. Notice that “critical errors that impact patient safety” and “errors and failures” are highlighted in red.[/quote]

    [img]http://chuckwebster.com/wp-content/uploads/2011/06/summative-ehr-usability-test-plan.png[/img]

    [url]http://chuckwebster.com/2011/06/usability/nist-emr-ehr-usability-workshop-a-highly-annotated-tweetstream[/url]
