Dear Director: We’ve never shared metric data among the providers at my site but I’m thinking about doing it as a way to improve our metrics. How transparent should I be with the group?
Often, I think docs are unaware of their own metrics, let alone how they compare to their colleagues. Emergency physicians are a competitive bunch and no one likes to be at the bottom of a list. Transparency of metrics encourages efforts to improve performance. I work for a very transparent group, so over the years I’ve become accustomed to working with providers who were either uncomfortable having their metrics displayed or wanted help to improve their performance.
Transparency Starts At The Top
If transparency is part of the culture, it needs to be throughout the culture. This means starting at the top from a management perspective. Vision and mission need to be discussed and management decisions should also be transparent by explaining the ‘why’ (check out www.startwithwhy.com). It’s one thing to mandate that length of stay needs to decrease by 30 minutes, it’s an entirely different conversation when it’s done in the context of increasing capacity to improve patient safety. When docs understand the impact of long LOS, such as worse outcomes with MI and sepsis, then it’s easier to get them on board to reduce their own LOS. Unless you’re getting 12 new ED treatment rooms and the additional 12.6 RN FTEs to staff them 24/7, the best way to take care of the waiting patients is likely to create space by reducing LOS. Conveying the ‘why’ behind ED metrics is critical to physician commitment to improving their performance.
About 10 years ago, I took over as chairman of a group that didn’t share data. Our metrics weren’t good. Everyone thought they were the busiest doc, cranking through so many patients, yet they were completely unaware of their numbers. No one knew their average patients/hour (or even how many patients they averaged per shift). The patients-per-hour range actually ran from 1.4 to 2.2. I understand that some docs are faster while others are slower, but I felt we needed to narrow the gap and set some expectations for performance.
Before we could get to transparency, we had to get to awareness. I spent the first few months talking about data in the broadest sense, mostly focusing on the group average and comparing it to other averages I could find (the company average and goal, other ERs my friends worked at, etc.). After a few months, I created a bar graph that showed everyone’s performance. This was blinded, and I handed out each doc’s unique code (a letter from A to M) so they could find themselves on the graph. After a couple of months of people not remembering their codes, and me getting tired of handing out little pieces of paper with a letter on them, I took the blinders off the graphs.
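For directors who want to try the same blinded rollout, the mechanics are simple enough to script. This is a minimal sketch, not a production tool: the provider names and patients/hour values are hypothetical, and the letter-code assignment is just a shuffled A, B, C… mapping so each doc can find only their own bar.

```python
import random

# Hypothetical patients/hour values, for illustration only
pph = {"Adams": 2.2, "Baker": 1.9, "Chen": 1.7, "Diaz": 1.4}

def blind(metrics, seed=None):
    """Assign each provider a random letter code (A, B, C, ...)
    and return the blinded metrics plus the private code key."""
    rng = random.Random(seed)
    names = list(metrics)
    rng.shuffle(names)  # randomize so codes don't reveal alphabetical order
    codes = {name: chr(ord("A") + i) for i, name in enumerate(names)}
    blinded = {codes[n]: metrics[n] for n in metrics}
    return blinded, codes  # hand each doc only their own code

blinded, key = blind(pph, seed=42)
# Sort descending so the fastest doc tops the "bar graph"
for code, value in sorted(blinded.items(), key=lambda kv: -kv[1]):
    print(f"{code}: {'#' * int(value * 10)} {value:.1f} pts/hr")
```

Unblinding later is just a matter of printing names instead of codes from the same key.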
Shockingly, no one cried or ran away because of public humiliation. What we all realized is that everyone already knew who was fast and who was slow. The graph just gave us the knowledge as to exactly how far we were from our goal LOS or from the fastest in the group. Whether you’re pulling back the curtain publicly or privately, remember one caveat: it’s important to make sure that data feedback is perceived as an education opportunity and not just a chance to criticize or pass judgment.
There are usually one or two people who like being the fastest and may take a little competitive enjoyment out of being at the top. That’s great, as long as it doesn’t compromise other metrics or quality of care. What I’ve never seen in 15+ years of publicly sharing data is the most productive person regressing to the mean. That would be a concern if a doc felt like they were bearing too much of the burden and decided to take that extra coffee break and slow down. You can speed up slower docs, but it’s rare to see fast docs lose much productivity. Most sites have some sort of productivity-based pay, and docs who see more patients enjoy more compensation.
In some sites, you may consciously want to raise the patients/hour bar for everyone. More often, for a productive ER, it’s about bringing the least productive docs up to a reasonable level. Metric transparency lets people see how far off the mark they are. Competitive juices will typically garner some improvement. But our job as managers doesn’t end with just showing them the graph. Now we have to coach, mentor, and educate the doc about how to get to the next level.
How Far To Go
If it’s not already clear, I’m a big fan of transparent data. I’m also a pretty big data geek and our hospital subscribes to a business intelligence service that gives me more data than I could ever use. So one of the questions has to be how far we’re willing to go with not only publicly sharing data, but also using it wisely.
The big-picture stuff that would be part of an annual evaluation and is critical to hospital metrics is a given. This includes individual metrics for patients/hour, RVUs/hour, LOS, and patient satisfaction (if you’re able to get it). Door-to-doc time is a core measure and is critical to the hospital, particularly if they post it on a website or a billboard. However, if your wait time is much longer before the night shift starts than during the day shift, it might not be fair to your full-time nights doc. A better metric to study would be bed-to-doc. This is a cleaner number, allowing for better comparisons and making it easier for docs to own.
If you’re not already looking at CT utilization, I can assure you this is coming. While I think there’s value in knowing the group average and how each provider compares to that, I have a hard time telling docs they need to lower their rates. When I’ve had those conversations in the past, I’ve perceived an increase in the number of diagnostic misses. In my mind, that’s because people may have been reluctant to order a CT if they felt like they’ve used their daily allotment already. CT usage is widely variable by site and patient mix, but generally EDs range from 20–25 CTs/100 patients. There is, however, no standard for the “right” number; therefore, setting a group goal in this circumstance probably does not make sense.
On the other hand, it is difficult to explain why one doc in your group orders CT scans at twice the rate of another. In the case of CT utilization, there is probably some value in your providers at least being aware of how they compare to their peers. I have heard anecdotal reports of sites reducing their overall rate of CT utilization by 10% just by providing data transparency. But my concern with publishing and focusing on these rates is making sure we reduce utilization without any added risk to patients. Appropriate CT usage lends itself better to a quality improvement project where CTs are reviewed in the setting of a particular diagnosis, such as closed head injury or chest pain to rule out PE.
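If you want to start the awareness conversation about CT utilization, the rate itself is easy to compute from counts your billing or BI system should already have. This is a minimal sketch with made-up numbers: the provider names, counts, and the 1.5×-group-rate review threshold are all assumptions for illustration, not a standard.

```python
# Hypothetical per-provider counts: (CTs ordered, patients seen)
counts = {"Doc A": (180, 900), "Doc B": (140, 800), "Doc C": (340, 850)}

def ct_rates(counts):
    """Return CTs per 100 patients for each provider, plus the group rate."""
    rates = {doc: 100 * cts / pts for doc, (cts, pts) in counts.items()}
    total_cts = sum(c for c, _ in counts.values())
    total_pts = sum(p for _, p in counts.values())
    return rates, 100 * total_cts / total_pts

rates, group = ct_rates(counts)
for doc, r in sorted(rates.items(), key=lambda kv: kv[1]):
    # Arbitrary illustrative threshold: flag docs well above the group rate
    flag = "  <-- worth a chart review?" if r > 1.5 * group else ""
    print(f"{doc}: {r:.1f} CTs/100 pts{flag}")
print(f"Group: {group:.1f} CTs/100 pts")
```

Note that the group rate is computed from pooled totals rather than by averaging the individual rates, so low-volume providers don’t skew the denominator.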
Once you start to make transparency a priority, it’s important to let new docs you’re interviewing or hiring know about your culture. There are definitely people who desire to join a group with metric transparency and will thrive, while there are others who may shy away from this environment. When I interview docs, I set expectations about the job and discuss important cultural aspects of our department. This is one that needs to be included if you operate this way.
Medical directors are generally under pressure to improve some aspect of the department’s performance. While we always have to keep the overall operation of the department in mind, we are also at the mercy of how individuals perform. We can’t control every aspect of the ED visit, but we need to take advantage of the things we can control and measure. Start with awareness of the key metrics and then add in transparency. Be sure to establish goals and work with your under-performers on improvement strategies. As I’ve heard ACEP President Jay Kaplan say many times, “If you want to improve performance, publish your data. If you want to improve it faster, be transparent and publish names with the data.”