Q&A: The Difference Between Patient Satisfaction and Quality

By Brian Ward

Editor’s note: CMS, The Joint Commission, hospital organizations, and private vendors each have their own way of calculating a hospital’s ranking and driving improvements in patient care. While hospitals are already required by various agencies to conduct certain surveys, how often should they conduct their own research? What measures should they use? And how do you use the information you have to drive improvement? Two terms often used in conjunction with a hospital’s merit are patient satisfaction and quality of care. The following is an edited Q&A with Craig Deao, senior leader at Studer Group, on the differences between the two and how they can be used to drive improvement.

Q: How do you differentiate between patients’ satisfaction with their care and the quality of the care they received?
Deao: You’re trying to measure two very different things. The healthcare industry really started looking into satisfaction about 30 years ago, asking “What does the organization do for you? How satisfied are you with that?” That also takes in the experience and a lot of service hygiene factors.

The quality of care traditionally has been measured by the healthcare industry’s view of what makes for good outcomes and good process measures to predict those outcomes. [Examples are] process of care measures such as “Did you get your aspirin on time?” and outcome measures like morbidity and mortality.

But nobody gets to define quality without the voice of the customer. The more contemporary measures looking at patient experiences are really measures of experienced quality. If you look at the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) series of surveys, those were survey questions that CMS asked AHRQ (Agency for Healthcare Research and Quality) to help develop over a decade ago so that they could measure a patient’s view of quality.

These surveys get at whether patients can accurately rate things that correlate with quality. The HCAHPS series of surveys are not satisfaction surveys; they’re frequency surveys. They don’t ask whether your care was “excellent,” “good,” or “poor.” They ask whether you “never,” “sometimes,” “usually,” or “always” saw some evidence-based practice at the bedside.

The only measures we had in the early days were satisfaction measures, which remain good, important, vital measures. And we’ve now added measures on how patients perceive the quality of care. And the best way to assess that is how frequently they saw specific evidence-based things happening at the bedside, like having their pain level controlled or understanding side effects of medications. Those aren’t satisfaction questions; those are quality questions.

Q: Is one more important than the other?
Deao: I wouldn’t say that one is more important; it’s really that they’re measuring very different things. And they correlate.

There are a number of studies supporting the idea that while patients may not understand the technical aspects of care, their perception of quality (what they see, hear, and experience) [is] really accurate. In fact, there have been several systematic reviews of the literature concluding that patient experience data is positively correlated with clinical effectiveness and safety, which makes a pretty strong case that patients can accurately define quality. It’s not a different thing.

So it’s not that one is more important than the other; it’s that they’re related, they’re valid, and they’re both important measures.

Q: Aside from mandated surveys like HCAHPS, how often should facilities conduct a patient satisfaction and quality-of-care survey with patients? What are the benefits of these surveys?
Deao: I think you first have to look at what you’re trying to answer by doing those other surveys. And there’s a range of those:
•    How do we compare relative to others so we can decide where to invest resources?
•    What’s the improvement we’re trying to make for the organization?
•    How do we improve care for the patients in front of us?
•    How do we predict where we should invest for market share and loyalty gains?

Those are all pretty different questions to answer. I think too often healthcare organizations are looking for one type of question or one type of survey to answer all those things.

I think in the future you’re going to see organizations make much more selective choices about which questions they should be asking patients based on those outcomes. And that will drive the frequency.

With mandated surveys, the challenge is that the data lags and the sample size is generally not very high. The methodology and mode are excellent, and there are controls for bias, so it’s very accurate data. But it’s not timely, and it doesn’t have nearly the power in terms of sample size that you’d want if you were using it for performance improvement alone. There are all kinds of other survey modes, such as collecting through iPads or text message. While each has limitations, each is helpful for answering different questions.
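To make the sample-size point concrete, here is a minimal sketch (an illustration added for this article, not part of the interview) of the standard normal-approximation margin of error for a survey proportion; the 70% “top-box” rate and the sample sizes below are hypothetical:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error, in percentage points, for a proportion p
    observed in a sample of n responses (normal approximation)."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Suppose 70% of respondents answered "always" on an HCAHPS-style item.
for n in (30, 100, 300, 1000):
    print(f"n = {n:4d}: ±{margin_of_error(0.70, n):.1f} points")

# n =   30: ±16.4 points
# n =  100: ±9.0 points
# n =  300: ±5.2 points
# n = 1000: ±2.8 points
```

With only a few dozen returned surveys, a score is accurate enough to show roughly where you stand, but far too imprecise to track small month-to-month improvements.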

So I think the answer to your question, “How frequently should you survey?” is that you first have to answer, “What is it you’re trying to answer?”

Q: What are common mistakes/issues hospitals have when conducting these surveys?
Deao: I think too many organizations have simply let regulatory requirements determine which questions they’re going to ask. And while I really support those requirements (I think HCAHPS and the related surveys are excellent), if that’s all you’re measuring, you’re saying you’re only looking for answers to the questions those surveys ask.

For example, those mandated surveys aren’t market share surveys, nor are they deep enough to tell you where to make precision improvements. Doing the bare minimum of meeting the mandate, rather than identifying what you need to know about or hear from your customers and going out to collect that data, is the biggest mistake I see.

A close second, though, is overpaying for “precision” when all you really need is “accuracy.” Accuracy is essentially matching the reality of the sentiment of the customers you’ve seen. Precision is “Is it accurate to that decimal point?” And there are a lot of organizations paying a whole lot of money for precision when their performance is actually quite poor.

They already know, based on [their measurements], how to improve. [For example,] say you’re in the bottom quartile, [based on] the data you’re receiving, but you’re not doing the things that evidence suggests would put you in the top quartile. There’s really not a whole lot more need to keep measuring with precision until you do those evidence-based things, because the data will simply continue to tell you, with precision, that you’re not doing those things. Simply getting on the scale again doesn’t help you lose weight.

There are a whole lot of organizations who’ve invested a whole lot into the precision of knowing exactly “how bad I am” versus knowing “where I am and what I need to do.” Most would be much better off investing time and resources in the improvement, not just the measurement, for a bit.

When you start getting really good at doing the things evidence suggests you ought to do, then you really ought to invest in more precise data. So accuracy is “Is it roughly correct or not?” and precision is “narrowly defining the fine points of accuracy.”

Q: What are the key metrics for measuring care quality?
Deao: There are some good sources for that, and there’s no need to reinvent the wheel on what good care quality measures are. The National Quality Forum does an excellent job with this. CMS has a very methodical process for selecting which measures to use for reporting and incentives. And I think the medical specialties also do a nice job. So there’s no shortage of measures to look at every dimension of quality.

I think organizations need to start by looking at those endorsed measures and figure out which of them they need to track to best help them achieve their mission. Too many organizations, I think, don’t do that last step of deciding, “Let’s narrow down the thousands of possible measures to the handful that are really most relevant to us.” But you shouldn’t start with a blank piece of paper and become “terminally unique” by inventing your own set of quality measures. The measures are already out there, and it’s a whole lot easier to make improvements and have benchmark data if you choose from the nationally accepted data sets.

Q: What are the key metrics for measuring patient satisfaction?
Deao: It depends on what you’re really measuring. If it’s satisfaction, there’s really no nationally accepted set of measures for satisfaction. Those have been driven by proprietary vendor measures: each vendor asks slightly different questions on slightly different scales, and how it combines all those responses together is how it determines satisfaction. So that’s probably the least standardized of the things you can ask patients.

When it comes to asking patients about their care experience as a correlate of quality, that’s probably the most well defined, and that’s the HCAHPS series of surveys. And then, if you’re looking at market share, I think the data is pretty clear that net promoter scores [are the best method].

There’s a lot of data coming out of Fred Reichheld’s work [saying] that essentially the only question you need to ask customers to predict whether you’re going to gain or lose market share in the future is the “likely to recommend” question. That is, “How likely are you to recommend this ‘blank’ to friends and family?” asked on a 0–10 scale.

The term “net promoter score” (NPS) comes about because 9s and 10s are the promoters: folks who are going to tell other people about your product or service. The 0–6s are the detractors; they’re going to tell people not to use your service. And the 7s and 8s are neutral; they aren’t going to say anything either way.

So the NPS methodology is that you take the percentage of 9s and 10s and subtract the percentage of 0–6s, and that becomes your net promoter score. It’s a benchmark survey that allows you to compare customer loyalty across industries, not just one healthcare organization to another. And if somebody asks me, “What’s the valid measure set we should look to for market share growth-oriented goals?” you certainly want to look at that “likely to recommend” question and net promoter score methodology.
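To make that arithmetic concrete, here is a minimal sketch (an illustration added for this article, not part of the interview), assuming a plain list of 0–10 “likely to recommend” responses:

```python
def net_promoter_score(ratings: list[int]) -> float:
    """Return NPS as a percentage: % promoters (9-10) minus % detractors (0-6).
    Passives (7-8) count toward the total but toward neither group."""
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))  # 30.0
```

A score above zero means promoters outnumber detractors; the scale runs from −100 (all detractors) to +100 (all promoters).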

Q: How do you ensure that you are getting useful data from your surveys? And how do you act upon that data once you do?
Deao: This is where it’s important to understand the notion of a feedback loop, because what we’re really trying to do is collect data that helps people make better decisions. And when it comes down to the individual level, feedback loops are important.

Wired magazine had a really interesting article a few years ago looking at interventions that reduce speeding on the highway. Option A was a standard speed limit sign (35 mph). Option B was a sign that says “Speed limit is 35 mph, but you’re going 42,” flashing your actual speed at you. And Option C was a law enforcement officer with a radar gun.

Turns out when you look at the intervention that reduces speeding by the most mph and has the longest effect after the intervention, it’s Option B. When you look at why that is, you hear from neurologists who’d tell you, “Well, that’s because it creates a feedback loop.”

The way that feedback works is that it has a few key attributes: it has to be timely, relevant, and credible data. And when I change my behavior, I see how it changes the data. So I take my foot off the gas, I see that I come into alignment with that norm, and I accept that the speed limit is 35. It triggers in your brain an “Attaboy, nice job.” That intrinsically hardwires that that’s good behavior, and so it actually causes that speed correction to last the longest compared to the other two interventions.

I thought that was crazy when I read it, because I know what happens when the officer points the radar gun at me: I slam on the brakes. But then once they point the radar at the car behind me, I speed right back up, because the behavior isn’t really intrinsically hardwired.

That’s what we’re trying to do with these feedback loops: how do you get data that is timely, relevant, and credible, such that when you do an intervention in your organization, you see the effect on the data, good or bad?

I think that when people start viewing how they’re collecting information from patients against that model, they start seeing gaps. Consider timeliness: if this information is from patients you saw nine months ago, it’s not very helpful for making improvements.

Is it credible data? Well, if it’s a sample size of three and I saw 4,000 patients in that period, that’s not really good credibility.

Is this compared to something that I care about? For example, if you’re collecting physician-specific data and comments, and you’re comparing my results to all the physicians in the organization that aren’t in my specialty, I’m going to say that’s not very relevant data to my patient population.

There’s a multipoint test that I think organizations have to apply. But at the end of the day, the test is whether the data are being used for insight that you can actually make improvements upon.

The old ladder I learned about in information science is that you go from data to information to knowledge to wisdom. When you complete the survey and get the spreadsheet back, you have data and information. As you start interpreting and understanding what it’s saying, you have knowledge. And when you start understanding what you can improve and gain mastery through experience, it becomes wisdom. Too many organizations start on the data side and don’t have a way to turn that into the wisdom that comes from a feedback loop.

This article originally appeared in Briefings on Accreditation & Quality.