
Patient Satisfaction Surveys Are Here to Stay

Patient satisfaction surveys are here to stay ... so let's just make sure they are valid and reliable.

For years, the most common argument against measuring patient satisfaction was "I'm here to save lives, not make friends." Studies correlating higher patient satisfaction with increased hospital revenue and reduced mortality convinced many in the industry. Yet there were always those who did not care whether patients were satisfied with the experience as long as they survived it.


Today, healthcare providers’ opinions about the relative value of measuring patient satisfaction are moot: The Centers for Medicare and Medicaid Services (CMS) already uses measures of patient satisfaction to help determine hospital compensation and soon will base a portion of reimbursement on scores achieved through the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey instrument. 

While currently only inpatient care is affected, once the government implements this policy fully, other health care entities will almost certainly be measured as well. For example, Home Health CAHPS was recently implemented, and Clinician and Group CAHPS is currently in development for physician practices. In addition, emergency department experiences affect the attitudes and satisfaction of patients admitted to the hospital. So improving ED satisfaction will improve inpatient satisfaction, which will, in turn, affect reimbursement.

Thus, the argument of those who challenge the importance of patient satisfaction has shifted from "We don't need to care about satisfaction" to "Satisfaction isn't being measured properly" and "The data are being misused." This article seeks to address those concerns.


Properly vetted questions
First, the survey needs a standard set of questions, applicable across EDs, that all departments in the database will use. These core questions need to be central to the ED experience across wide swaths of the population and the EDs themselves. This allows for realistic benchmarking. In addition, an ED must be able to add custom questions to the survey that are pertinent to its own initiatives, challenges and opportunities. To ensure they do not alter the tenor of the survey, change the survey's intent or confuse patients, survey design experts must vet both common and custom questions.

Too often, confusing, misleading or marketing questions appear on an otherwise psychometrically sound satisfaction survey. For example, double-barreled questions such as "How would you rate how well the staff explained the patient's care and condition?" do much to harm survey validity while adding no value to the results. Patients get confused because they do not know if they are responding to "how well the staff explained the care," "how well the staff explained the condition," or both questions at once.

Pilot-testing
After survey design and health care experts have properly vetted a set of standard questions, these questions need to be pilot-tested. The most important step in measuring patient satisfaction is choosing a valid and reliable survey; no amount of planning will help your ED if it is addressing issues that are not there. Pilot-testing demonstrates that a survey meets industry standards and accurately measures a person's attitudes and beliefs.

There are many survey vendors with many theories on the proper manner of measuring satisfaction. Rather than debate whose is most correct, it is best to judge each according to standards commonly accepted across the social sciences.


To do this, there needs to be a psychometrics report that details the survey design and testing. This report has two primary functions: it describes the survey's reliability and its validity. Reliability is a statistical concept that simply means the survey has consistency; it will yield consistent results for the same perceived experiences. Validity means that the survey properly measures what it was intended to measure, in this case satisfaction with the ED experience.

A psychometrics report does not ensure that the survey is either reliable or valid; but there is no way to determine whether it is reliable or valid without one. The psychometrics report should include at a minimum: how the survey was tested; what changes were made as a result of the testing; response rates; measures of central tendency; and how the designers tested variability, readability, reliability and validity. 
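For readers who want a concrete sense of what one of these reliability statistics looks like, the sketch below computes Cronbach's alpha, a standard measure of internal consistency, on invented responses. The item scores are hypothetical, and this is a generic illustration of the statistic, not any particular vendor's methodology.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    k = items.shape[1]                          # number of survey items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 ratings from six respondents on three related ED items
scores = np.array([
    [5, 4, 5],
    [3, 3, 4],
    [4, 4, 4],
    [2, 3, 2],
    [5, 5, 4],
    [1, 2, 2],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # ~0.93: items hang together well
```

Values near 1 suggest the items are measuring the same underlying attitude; a psychometrics report should state figures like this along with how they were obtained.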

Population definition and sample selection 
The most critical aspect of assessing data is making sure it is suitable for descriptive, comparative and evaluative purposes. In most cases, this means either taking a census or some form of a random sample. Random sampling means that each patient has a known and equal chance of being selected.

Some argue that CMS has made random sampling impossible by insisting that ED patients who are admitted to the hospital be sampled as inpatients, so not every ED patient can be sampled. It is important to recognize that this is not a sampling issue but a definitional one. The requirement is the same for all hospitals. Therefore, we still have a known population (ED patients not admitted to the hospital) that we can sample.
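As a minimal sketch of what this looks like in practice, the snippet below first defines the population per that rule and then draws a simple random sample from it. The visit records and field names are hypothetical.

```python
import random

# Hypothetical ED visit records; 'admitted' flags patients who became inpatients
visits = [
    {"id": 101, "admitted": False},
    {"id": 102, "admitted": True},
    {"id": 103, "admitted": False},
    {"id": 104, "admitted": False},
    {"id": 105, "admitted": True},
]

# Define the population first: ED patients NOT admitted to the hospital,
# per the CMS requirement described above
population = [v for v in visits if not v["admitted"]]

# Simple random sample: every patient in the defined population has a
# known and equal chance of being selected
sample = random.sample(population, k=2)
print([v["id"] for v in sample])
```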


Proper sample size
For various statistical and theoretical reasons, social researchers commonly use 30 as a cut-off for a minimum sample size. Press Ganey has adopted this as an absolute minimum sample size (for its database of small EDs), but recommends 50 as a minimum because of the large decrease in standard error relative to the low added cost of obtaining 20 more surveys. Note, however, that these are both minimums. Samples based on these minimums are representative within the bounds of statistical parameters, but each ED needs to determine the sample size it is comfortable with. Sampling should be set so that ED leaders have enough data to measure the smallest unit they wish to measure and have the desired degree of confidence in that data. Also note that for the database of large EDs, the absolute and recommended minimums are 145 and 227, respectively.
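To see why moving from 30 to 50 returned surveys (or up to the large-ED minimums) buys a worthwhile drop in uncertainty, here is a back-of-the-envelope sketch of the 95% margin of error for a proportion, assuming the worst-case 50% response split. The calculation is the generic textbook formula, not Press Ganey's own methodology.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (30, 50, 145, 227):
    print(f"n = {n:>3}: +/- {margin_of_error(n):.1%}")
# n =  30: +/- 17.9%
# n =  50: +/- 13.9%
# n = 145: +/- 8.1%
# n = 227: +/- 6.5%
```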

Many survey organizations, Press Ganey included, will issue reports with as few as seven returned surveys despite having a stated absolute minimum sample size. This is because it is the ED's data, and the ED has a right to see it. Moreover, if these data were not provided, valuable information and comments could be lost. Imagine not receiving your data because of a small sample size and then facing a lawsuit as a result of a survey or comment that was never seen. Withholding data based on small sample sizes both takes ownership of the data away from the ED and potentially masks serious issues. That said, Press Ganey is very careful to caution its clients on interpreting data based on small sample sizes. It will not defend data based on sample sizes of less than 30, and recommends that these data be used to identify potentially serious issues and in a qualitative, informative fashion.

Trending
Regardless of how much data one examines, EDs need to examine their data over time. To say "this physician needs coaching" because the physician suddenly appeared as a priority area in one day's (or one week's) worth of data may lead to fixing things that are not broken. Ask yourself, "Was this an issue last week? Last month? Last year? Has something changed that would cause this to suddenly become an issue?" If the answer is "no," then perhaps it is better to ask questions than to immediately look to make changes.

Examining data over time is also a way to mitigate problems of small sample sizes. If an ED is unable to obtain a sample size it is confident in, then it should examine data over a quarter instead of a month, or over six months instead of a quarter. Also, if you have a set of quarterly data across a few years, for example, and one data point stands out from the rest as "odd," there may be reason to be suspicious of that data point.
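A minimal sketch of both ideas, using pandas and made-up monthly scores: roll months up to quarters so each point rests on a larger sample, then flag any quarter that sits far from the overall pattern before acting on it.

```python
import pandas as pd

# Hypothetical monthly mean satisfaction scores for one ED
monthly = pd.Series(
    [82, 79, 84, 88, 80, 81, 83, 85, 78, 84, 82, 86],
    index=pd.date_range("2011-01-01", periods=12, freq="MS"),
    name="score",
)

# Aggregate months into quarters: larger effective samples, steadier trend
quarterly = monthly.resample("QS").mean()
print(quarterly)

# Across several periods, flag any value more than two standard deviations
# from the mean as "odd" and worth questioning (none in this toy series)
z = (quarterly - quarterly.mean()) / quarterly.std()
print(quarterly[z.abs() > 2])
```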

Benchmarking
Patient satisfaction scores do not exist in a vacuum. Suppose your score is 80, 90 or 78. What is a "good" score? The best way to know is to compare your scores with others'. You need at least two representative comparison groups that are similar to your department, to what you wish it to be, or both. For example, if yours is a large ED in southern Florida, comparisons with the nation overall are helpful. However, it may also be helpful to compare yourself to large EDs in the entire state, large EDs in the Southeast or the second shift of large, metropolitan EDs. These types of comparisons let you know what your score represents when compared to similar organizations using similar metrics.
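As a toy illustration of what a raw score means against a peer group, the snippet below converts a score into a percentile rank within a hypothetical comparison group of similar EDs; the peer scores are invented.

```python
from bisect import bisect_right

# Hypothetical overall scores from a peer group of similar large EDs
peer_scores = sorted([74.0, 76.1, 78.3, 79.0, 80.2, 81.5, 82.4, 83.8, 85.0, 88.2])

def percentile_rank(score: float, peers: list) -> float:
    """Share of peer scores at or below the given score, as a percentage."""
    return 100.0 * bisect_right(peers, score) / len(peers)

# A raw 80 only means something relative to the comparison group
print(f"{percentile_rank(80.0, peer_scores):.0f}th percentile")  # 40th here
```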
 
Data distribution
Are angry or dissatisfied patients over-represented? Most physicians we speak to seem to believe so. However, a recent study of Press Ganey ED data suggests that angry or upset patients are not more likely to respond to the survey, and that the data are a good representation of the underlying population. The patient-level responses in the database show a high proportion of very good and good responses, a smaller number of fair responses and fewer poor and very poor responses. This is exactly what we would expect if the majority of ED care in the United States were very good and very few EDs were performing poorly.

What the data are really about
Once everything is properly in place and you are ready to make some quality improvement decisions, you need to take a step back from the data. Consider other data sources (clinical outcomes, financial outcomes) as well as your own experience and the experiences of key stakeholders. Also consider your mission as a health care professional and the mission of your organization. For example, do you believe it is better to conduct medically unnecessary tests to appease the patient, or to find ways to help the patient understand why a test is unnecessary? What do you do when lower acuity patients have longer waits because of one or two higher acuity patients? Do you simply try to rush everyone through, or do you properly inform the waiting patients why they are waiting (and provide constant updates), identify ways to improve the efficiency of your processes and re-examine staffing decisions?

Patient satisfaction is not merely about improving scores but also about what happens next. Examining data in a broader context and identifying creative solutions are keys to providing better care. Yes, a dissatisfied patient may survive, but that patient may not come back, may not comply with the recommended treatment plan, may not recommend the hospital to others or, even worse, may tell others to choose other facilities. All of these have broad implications for the hospital's ability to survive and continue to offer quality care.

Read EPM’s October article challenging Press Ganey’s statistical methodology

Next Month: An emergency physician group explains why they have embraced satisfaction surveys

 

3 Comments

  1. How do you want doctors to make decisions? Do you want doctors to defer to patient feelings, hospital policy, current medical knowledge, scientifically sound practices, financial outcomes, health outcomes, staffing needs, or something else? More importantly, how do you want decisions to be resolved when there are competing interests? You may get what you measure and reward.

  2. Mann Bernhardt

    Over the last decade, this movement to “give patients what they want” led by administrators with no medical background has helped fuel a nationwide addiction to prescription drugs and a record number of prescription drug overdose deaths. In my state, prescription drug overdose death is more common than traffic death. Is it ever medically right to prescribe potentially lethal drugs to patients without objective evidence of the necessity for these drugs? Is it ethical or moral? Who’s responsible if a “patient” in the ED overdoses with his cache or sells it on the street and the teenager who buys it dies? The courts may find, and have found, the prescribing doctors at fault and imposed draconian civil and criminal penalties on the doctors involved. No administrators have ever been charged or sued for drug overdose deaths. It is our responsibility to act for the patients’ best interest, after all, and prescription of these drugs to curry favor with the administration will never be ethical or moral or right.
