
Keeping an Eye on 72-Hour Bouncebacks


Dear Director,
My CEO is concerned about our ED’s quality because we’ve experienced a recent increase in our 72-hour returns for admission. As an ED chairman, I know that quality is about more than one metric, so how seriously should I take this? 

Hospital administrators love stoplight-themed dashboards. These reports let executives view the key performance metrics of their entire hospital, often in a few seconds. The metrics highlighted in red become a source of concern and usually lead to a meeting with the medical director.

As medical directors, we have the responsibility of making sure that quality medicine is practiced in our departments. We have both formal and informal feedback loops (sign-out, chart reviews, complaints from hospitalists about care, and so on) in addition to our standard quality reviews. However, hospital executives don’t see all of that and instead have to judge our quality by surrogate means. In the end, it often comes down to 72-hour returns and patient complaints.


The quality of care we practice is certainly defined by more than a return rate. Yet we must also work within some reportable benchmarks. Every chairman I know reports to a hospital quality committee at least once a year. These reports generally allow us to define quality (what we choose to measure) as well as acceptable end points. It’s easy for the quality director to want the 72-hour return-for-admission rate to be zero. It’s up to the medical director to explain why that’s not desirable: you can’t admit everyone, doing so would lead to unnecessary admissions, and some diseases progress unpredictably. It’s also up to us to educate them about typical site averages and about what levels should serve as trigger points for concern. In other words, when should that dashboard really turn from yellow to red?

Let’s get down to data. The ERs I’ve worked in have had 72-hour return rates ranging from 3% to 7%. This number is fairly meaningless if you’re bringing people back for wound checks, packing changes, or suture removals, particularly if you have a large uninsured MRSA population. At my last two EDs, while the 72-hour return rates were vastly different, the 72-hour return-for-admission rate, which is the more critical measure, has been approximately 1%. From talking to numerous other chairmen, I think this is a pretty typical average, though I haven’t been able to find a national benchmark.

For a big-picture C-suite view, the 72-hour return-for-admission indicator isn’t a bad quality marker. Like your CEO, I would have concerns if this number doubled or tripled. While I wouldn’t jump to the conclusion that quality has declined, it would warrant further investigation. As with any data, sometimes you need to get into the weeds and put the numbers into context.


Some Basics: Quality Improvement
Let’s start with some basics: differentiating quality improvement (QI) from quality assurance (QA). The Health Resources and Services Administration defines QI as “systematic and continuous actions that lead to measurable improvement in health care services and the health status of targeted patient groups.” QI works on the principle that multiple people focus on the systems and processes that affect the patient, and that projects end with clear data outcomes. For any project, you want to know the endpoints: the desired goal, how you’re going to measure it, and, perhaps most importantly, how you’re going to use the data. At their best, well-structured QI projects can improve health outcomes, improve efficiency of care, and prevent avoidable costs. An example might be a project that sets up home IV antibiotics for patients with cellulitis instead of admitting them. It’s clearly multidisciplinary, improves health, and saves dollars. Measurable outcomes might include patient satisfaction and return visits to the ED or admissions within 7 days.

Quality Assurance
The monthly review of 72-hour returns for admission that we all do falls more under QA: the systematic monitoring of a program to ensure a level of quality. When we’re reporting quality of care to our hospital, we’re typically reporting the results of our QA projects. Whether we’re tracking sepsis bundle completion, deep sedation cases, or rapid sequence intubation (RSI), these projects should have an audit tool that follows an expected care plan with set, measurable outcomes. I’m a big-picture person, so I follow the KISS principle when I design audit tools: Keep It Simple, Stupid. I like four to six yes-or-no questions that are based on clear outcomes and fit nicely into a spreadsheet so that we can review our success as a percentage, as sketched below. This works great for projects like those mentioned above but, honestly, less so for the 72-hour return-for-admission audit.
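As a concrete illustration, here is a minimal sketch of how such a spreadsheet might be scored, assuming a hypothetical CSV export (sepsis_audit.csv) with one row per audited case and one yes/no column per question. The file name, column names, and questions are illustrative, not from an actual audit tool.

```python
import pandas as pd

# Hypothetical yes/no audit questions for a sepsis bundle review.
AUDIT_QUESTIONS = [
    "lactate_drawn",
    "cultures_before_antibiotics",
    "antibiotics_within_3_hours",
    "fluids_30ml_per_kg",
]

# Hypothetical CSV export: one row per audited case, "yes"/"no" per question.
audits = pd.read_csv("sepsis_audit.csv")

# Report success per question as a percentage, ready for a stoplight dashboard.
for question in AUDIT_QUESTIONS:
    pct = (audits[question].str.strip().str.lower() == "yes").mean() * 100
    print(f"{question}: {pct:.0f}% compliant")
```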

Into the Weeds
I’ve been a chairman for about nine years. Over this time, my sites have cared for about 500,000 patients. With a 1% 72-hour return-for-admission rate, that’s about 5,000 cases. While my team hasn’t reviewed every case (I’ve had my share of EMR issues and times when I couldn’t get the data), we’ve reviewed a lot of them. Years back, a colleague gave me a great grading/classification system. The reviewer classifies each case into a cause: progression of disease, judgment error, radiology issue (misread), hospitalist related, and so on. Then, on a fairly regular basis, we would summarize the issues for the group and make recommendations based on the trends we were seeing. Early on, we found that certain types of patients with cellulitis bounced back for admission. In reality, you don’t have to be a rocket scientist to know that diabetics with cellulitis have a high chance of bouncing back if their initial visit vitals were grossly abnormal. However, seeing cases one at a time, providers didn’t realize how many came back for admission; our audits picked it up. We also identified trends in our psychiatric patients bouncing back. These two trends are interesting because they’re both highlighted in Sklar’s paper “Unanticipated death after discharge home from the emergency department” (Ann Emerg Med 2007). With our data in hand, we were able to educate our staff and change our treatment of these patient groups.
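For illustration, here is a minimal sketch of the summary step, assuming each reviewed case has already been assigned one cause from the classification system. The sample records and field names are hypothetical; the categories follow the text.

```python
from collections import Counter

# Illustrative sample of reviewed bounceback cases, each classified by cause.
reviews = [
    {"case_id": "0001", "cause": "progression of disease"},
    {"case_id": "0002", "cause": "judgment error"},
    {"case_id": "0003", "cause": "radiology misread"},
    {"case_id": "0004", "cause": "progression of disease"},
    {"case_id": "0005", "cause": "hospitalist related"},
]

counts = Counter(case["cause"] for case in reviews)
total = len(reviews)

# Report each cause as a share of all reviewed returns so trends
# (e.g., a cluster of cellulitis bouncebacks) stand out over time.
for cause, n in counts.most_common():
    print(f"{cause}: {n} ({n / total:.0%})")
```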


Other Trends
I’ve always seen a high percentage of patients return for admission after signing out against medical advice (AMA) on the first visit. More recently, I’ve also seen a trend of patients who were admitted or placed in observation on the initial visit, then discharged, and then returned within 72 hours. That’s probably due to an inpatient push to reduce length of stay.

When your rate climbs and you’re being questioned about it, you need to be able to classify the causes and provide an explanation, and that only comes from knowing your data. I would clean up my data, making sure to exclude patients who left AMA or who were already admitted after the first visit. Then I’d focus on the patients the ED had complete control of. Over the last nine years, we’ve found a relatively small number of cases that were truly the result of bad judgment. Some bouncebacks have been due to mental errors (not checking labs or radiology results before discharging a patient, only to call them back once the results were found), but most reflect progression of disease after an appropriate ED workup and discharge.
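As a sketch of that cleanup step, assuming a hypothetical EMR export with boolean flags describing the first visit (the file and column names are illustrative):

```python
import pandas as pd

# Hypothetical EMR export: one row per 72-hour return-for-admission case.
returns = pd.read_csv("returns_72h.csv")

# Exclude cases the ED never fully controlled: AMA departures and patients
# who were admitted or placed in observation on the first visit.
clean = returns[
    (~returns["left_ama"].astype(bool))
    & (returns["first_visit_disposition"] == "discharged")
]

print(f"All returns: {len(returns)}; ED-controlled returns: {len(clean)}")
```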

Getting Granular
I further break this data down to the individual provider level, and it’s one of the quality measures I use for each provider’s semi-annual OPPE. In the interest of full disclosure, I allow a fairly broad acceptable range. On the other hand, I occasionally have a provider who is much higher than our group average, and they get a much more thorough review. Keep in mind that mid-level providers, and those who primarily work fast track, tend to have very low return rates, so the 1% department average tends to be lower than the rate of any individual physician primarily working the main side of the department. The individual view may highlight someone whose return rate is very low but whose admission rate is high (perhaps not a positive for the hospital because of cost), or someone who does minimal, focused workups but has a higher return rate. There are pros and cons to both; identifying areas for improvement, and thus reducing some variation, is part of making sure that we provide a consistently high quality of care.
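A minimal sketch of that provider-level breakdown, assuming a hypothetical visit-level export with boolean outcome flags; the column names and the flagging threshold are illustrative assumptions, not OPPE standards.

```python
import pandas as pd

# Hypothetical visit-level export: one row per ED visit, with the treating
# provider and boolean outcome flags.
visits = pd.read_csv("ed_visits.csv")

# Per-provider volume, 72-hour return-for-admission rate, and admit rate.
by_provider = visits.groupby("provider").agg(
    visits=("provider", "size"),
    return_admit_rate=("returned_72h_admit", "mean"),
    admit_rate=("admitted", "mean"),
)

group_rate = visits["returned_72h_admit"].mean()

# Flag providers well above the group average for a more thorough review;
# the 2x trigger is an illustrative cutoff, not a published standard.
flagged = by_provider[by_provider["return_admit_rate"] > 2 * group_rate]
print(flagged.sort_values("return_admit_rate", ascending=False))
```

Viewing the admit rate alongside the return rate is what surfaces both kinds of outliers the text describes: the low-return, high-admit provider and the minimal-workup, high-return one.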

Messaging
At the end of the day, you’re still stuck with a CEO who has concerns about the quality of medicine being practiced in your ED. Your job, and your group’s contract, may depend on your answer and your actions. Telling your CEO that you’re confident the quality of care is high isn’t going to cut it. Nor will telling him the metric is BS. The best way to address his concern is to know the data before you meet with him. What’s causing the spike? Do you have a new ED provider (or hospitalist group) who tries to send everyone home? Is there a new nursing home down the street? Did something else change with your psych population? Knowing the context behind the data, and being ready to educate your staff about the trends, is critical. Your CEO needs to know that you recognize and share his concern, that you’ve identified the key issues, and that you can fix the problem if there is one. He may also need his expectations reset. After all, medicine is changing. We’re treating traditionally inpatient diseases as outpatients, and yes, some of these patients will bounce back.



Michael Silverman, MD,
is a partner at Emergency Medicine Associates and is chairman of emergency medicine at the Virginia Hospital Center.


ABOUT THE AUTHOR

EXECUTIVE EDITOR Dr. Silverman is Chair of Emergency Medicine at VHC Health and a Medical Director with USACS. Previously, he taught a leadership development course for over a decade. Dr. Silverman’s practical wisdom is collected in an easy-to-use reference guide, available on Amazon. Follow him on X/Twitter @drmikesilverman.
