Everything you need to know about the newest hospital rating system.
Dear Director: My hospital CEO just told me that we received a three-star quality rating from CMS and that the ED needs to improve its performance. What is he talking about?
Welcome to the newest way for consumers to get a summary rating of your hospital's quality performance based on existing publicly reported quality data. When the Affordable Care Act (or Obamacare) was enacted, one of its requirements was that hospitals publicly report quality data. In part, this was designed to make hospital performance more transparent, to make hospitals more accountable, and to educate consumers. I can't look up a restaurant or a hotel without its star rating popping up in my internet search, so it doesn't surprise me that a star rating system is being used in healthcare. In fact, Medicare star ratings first appeared in December 2008 on Nursing Home Compare, and about a year ago hospitals received star ratings based on their HCAHPS scores. I'm not endorsing the star rating system, but it's important for medical directors at least, and maybe all docs who work in the hospital, to understand what the ratings mean and how they were created. Certainly, our C-suite will be paying attention to the rating and how the hospital compares to its competition.
Background
The Overall Hospital Quality Star Rating is designed to ease the burden on the consumer by providing a single summary rating from CMS. It does not include any new quality measures and will not replace the existing reporting of any individual quality measure. The new rating system took over two years to develop and included substantial stakeholder input. The goal was a statistically sound method for converting the 113 quality measures for which data is currently available into a system that assigns each hospital between 1 and 5 stars for its overall quality performance. In an effort to align this program with existing CMS programs, the groupings mimic those seen on Hospital Compare, and the weighting of the various quality measures is similar to that used in the Value-Based Purchasing Program.
The Process
In an effort to be transparent, CMS recently published the methodology for the quality star rating program in the "Overall Hospital Quality Star Ratings on Hospital Compare April 2016 Methodology and Specifications Report" [1]. Assigning a star rating is a five-step process (see figure below): selecting the measures, grouping them into seven categories, calculating a score for each group, generating a summary score, and assigning a star level based on cluster analysis. As of April, 113 measures were eligible for inclusion; after excluding certain measures (those that are retired, reported by fewer than 100 hospitals, duplicative, etc.), 62 measures remain.
* The H's represent the proportion of hospitals at each star rating.
The seven groups, or domains, are listed in Figure A. After all these years of us saying the ED is the front door to the hospital, it turns out we have a pretty broad impact across many of these groups. We probably have less impact in the mortality and safety of care groups, though we may be involved in these cases; for safety of care, think CLABSI and CAUTI. We may not want to admit our role in the readmission group, but we are sometimes the decision makers on a potential 30-day readmission. The patient experience group is currently composed of inpatient HCAHPS scores, although the Studer Group has certainly presented the relationship between ED patient satisfaction scores and inpatient HCAHPS scores. Left Without Being Seen (OP-22), head CT results within 45 minutes for potential t-PA patients (OP-23), and stroke thrombolytic rate for potential t-PA patients (STK-4) fall in the Effectiveness of Care group. We have less input into the efficient use of medical imaging, but the ED owns the entire Timeliness of Care domain: all of the ED throughput metrics are in this group (ED-1, ED-2, OP-3, OP-18b, OP-20), as well as time to pain medication in long bone fracture (OP-21) and time to EKG (OP-5).
Just as we've always wanted to be recognized as the front door of the hospital, we've also asked for help reducing the time admitted patients spend in the ED. The Timeliness of Care domain puts this front and center for hospital administration, because its two most heavily weighted components are ED-1b (median ED length of stay for admitted patients) and ED-2b (admit decision time to ED departure for admitted patients).
Another important feature of the star system methodology is the unequal weighting of the components within each group. For the math nerds among us (my wife and daughter helped me with this one), the group score is calculated using a statistical technique called a latent variable model (LVM). The model assumes "that there is an unmeasured, unobserved dimension of quality for each hospital that is reflected in its measure performance" [2]. Each measure within a group is assigned a "load coefficient," which reflects how much that measure influences the overall group score. The load coefficients are the same for all hospitals; however, they change with each new data release. The loadings matter because they tell us which metrics count most in each group. As it turns out, for this year's Timeliness of Care domain, ED-1b and ED-2b are the most important, and those are also the metrics where we are most reliant on factors outside of the ED.
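To make the load coefficient idea concrete, here is a minimal sketch that approximates a group score as a loading-weighted average of standardized measure scores. This is a simplification of the actual LVM, which CMS estimates statistically from each data release; every number and loading below is invented for illustration.

```python
# Illustrative sketch only: approximates a group score as a loading-weighted
# average of standardized measure scores. The real CMS latent variable model
# estimates the loadings from each data release; these values are made up.

# Hypothetical standardized (z-score) performance for one hospital's
# Timeliness of Care measures, where higher = better.
measure_scores = {
    "ED-1b": 0.40,    # median ED length of stay, admitted patients
    "ED-2b": 0.25,    # admit decision to ED departure
    "OP-18b": -0.10,  # median ED length of stay, discharged patients
    "OP-20": 0.05,    # door to diagnostic evaluation
    "OP-21": 0.15,    # time to pain medication, long bone fracture
}

# Hypothetical load coefficients: larger values mean more influence on the
# group score. For 2016, ED-1b and ED-2b carried the most weight.
loadings = {
    "ED-1b": 0.90,
    "ED-2b": 0.85,
    "OP-18b": 0.40,
    "OP-20": 0.30,
    "OP-21": 0.25,
}

def group_score(scores, loads):
    """Loading-weighted average of the standardized measure scores."""
    total_load = sum(loads[m] for m in scores)
    return sum(loads[m] * scores[m] for m in scores) / total_load

print(f"Timeliness of Care group score: {group_score(measure_scores, loadings):.3f}")
```

The takeaway is the same as in the text: with ED-1b and ED-2b carrying the largest loadings, movement on those two metrics moves the group score far more than the others.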
The last step is calculating the total score, but the seven domains are not weighted equally. Four of them are weighted at 22% each and the other three carry 4% each (see table below). The thinking was that the outcome and experience groups (mortality, safety of care, readmission, and patient experience) should count for more than the process groups (effectiveness of care, timeliness of care, and efficient use of medical imaging). The weighting is similar to that used in the Value-Based Purchasing Program. Perhaps the good news for us as emergency medicine leaders within the hospital is that the Timeliness of Care domain represents only 4% of the total score.
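As a back-of-the-envelope illustration of what that 22%/4% split means, here is a short sketch. The weights are the published ones; the group scores themselves are invented for illustration and are not any real hospital's results.

```python
# Hypothetical group scores for one hospital (standardized, higher = better).
group_scores = {
    "Mortality": 0.10,
    "Safety of Care": -0.05,
    "Readmission": 0.20,
    "Patient Experience": 0.15,
    "Effectiveness of Care": 0.30,
    "Timeliness of Care": -0.40,  # even a poor ED throughput score...
    "Efficient Use of Medical Imaging": 0.05,
}

# Published domain weights: four outcome/experience groups at 22% each,
# three process groups at 4% each (4 x 22 + 3 x 4 = 100).
weights = {
    "Mortality": 0.22,
    "Safety of Care": 0.22,
    "Readmission": 0.22,
    "Patient Experience": 0.22,
    "Effectiveness of Care": 0.04,
    "Timeliness of Care": 0.04,
    "Efficient Use of Medical Imaging": 0.04,
}

summary = sum(weights[g] * group_scores[g] for g in group_scores)
print(f"Summary score: {summary:.3f}")  # ...moves the summary score only slightly
```

Run the numbers and you can see why a bad Timeliness of Care quarter stings the ED far more than it stings the hospital's overall star rating.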
Once the summary score is calculated for each hospital, hospitals are then grouped via k-means clustering. The purpose of this is to "organize hospitals into one of five categories such that a hospital's summary score is 'more like' that of the other hospitals in the same category and 'less like' the summary scores of hospitals in other categories" [1]. This creates a broad distribution. It turns out that of the 4,604 hospitals that will be assigned a star rating this month, only 87 (2.39%) will have five stars; 22.51% get four stars, 51.58% get three stars, 19.63% get two stars, and 3.89% get one star, a roughly bell-shaped distribution.
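For anyone curious what that clustering step looks like mechanically, here is a minimal sketch using scikit-learn. The summary scores are randomly generated stand-ins, so the resulting star counts won't match CMS's published distribution, and CMS's actual implementation details may differ from this toy version.

```python
import numpy as np
from sklearn.cluster import KMeans

# Simulated summary scores for ~4,600 hospitals (invented data for illustration).
rng = np.random.default_rng(0)
summary_scores = rng.normal(loc=0.0, scale=0.3, size=4604).reshape(-1, 1)

# k-means with k=5: each hospital lands in the cluster whose mean summary
# score is closest to its own.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(summary_scores)

# Order the clusters by mean score so the lowest-scoring cluster maps to
# 1 star and the highest to 5 stars.
order = np.argsort(km.cluster_centers_.ravel())
stars = {cluster: star for star, cluster in enumerate(order, start=1)}
star_ratings = np.array([stars[c] for c in km.labels_])

for s in range(1, 6):
    n = int((star_ratings == s).sum())
    print(f"{s} star(s): {n} hospitals ({100 * n / len(star_ratings):.1f}%)")
```

The key point is that the star cutoffs are not fixed thresholds; they fall out of where the clusters land, which is why most hospitals end up bunched in the middle at three stars.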
How Will The Star Rating Be Used?
There are a lot of things that drive patients to a particular hospital, but I'm not convinced that patients use the Hospital Compare website that often to compare quality between hospitals. I'm not even convinced that people look at websites that post ED wait times. However, something like a star system makes it much easier for patients to quickly judge a hospital's quality, and this may drive some of their decision making. Hospitals that score well will definitely promote their results in advertising. From a business perspective, it's likely that hospitals will try to negotiate with insurance companies to become preferred providers if they can demonstrate higher quality and perhaps better value to the consumer. Although I haven't quite figured out how they would calculate the score, I'm hearing rumors that CMS would like to assign a star rating to individual and group providers as well. Stay tuned for more on that in the future.
Next Steps
Think of the star rating as a summary of all the things we’ve been working on for years. It’s also a roadmap of where to focus your administrative efforts. The goal of any manager is to align their efforts in the same direction as the company or CEO. As a first step, I would sit down with your CMO or director of quality and get the specific results and scores that composed your hospital’s star rating. If you don’t already have an ED dashboard that encompasses all of the pieces that are reported, now’s the time to create one. I have no doubt that whomever you meet with regularly in the C-suite will be asking you about the ED’s performance and ultimate contribution to the hospital’s score. You’ll certainly need to be knowledgeable about the components and should be ready to discuss a game plan for areas where your ED is underperforming. You can prioritize your efforts based on the issues that are receiving the greatest weight.
Conclusions
Like it or not (I think only a minority of CEOs will be happy with their star rating, and many physicians will chafe at the ED metrics being used), CMS has started rating your hospital on a five-star scale. All of the ED metrics used in the star rating system are typically followed by the ED medical director and nursing leaders, but now you may notice hospital administration is more interested in your performance. The good news is that with the added visibility, hospital administration may provide more support to help improve performance. We need to know which of our metrics are involved and how we're contributing to the hospital's score. Because some metrics are weighted more heavily than others, we should understand the hospital's administrative priorities, align with them, and begin tackling those issues one at a time.
REFERENCES
1. Overall Hospital Quality Star Ratings on Hospital Compare April 2016 Methodology and Specifications Report. https://www.qualitynet.org/dcs/ContentServer?c=Page&pagename=QnetPublic%2FPage%2FQnetTier2&cid=1228775183434
2. AHA Quality Advisory, January 27, 2016. http://www.aha.org/advocacy-issues/tools-resources/advisory/2016/160127-quality-adv.pdf
1 Comment
Well explained, Mike. Yes, patients and payers demand value more than ever. I see that our primary role is to close knowledge gaps in perceived quality (AKA satisfaction) and navigate patients to true quality in the least costly way possible.