
Homegrown Ratings Systems: Quality or Charisma?

Hospitals get into the doc rating game, but does that level the playing field?

As physicians brace themselves for "consumer-driven" care, high-deductible plans, and narrow network negotiations, they increasingly fear what patients find when they look them up on Google.

Third-party reviews and low star scores on sites like Healthgrades make physicians wince because they know they're much better than that.

That's why about 50 health systems and medical groups across the country, up from a handful a year ago, have launched their own online star rating programs to give a better picture of how their patients perceive their care in the doctor's office. Recent adopters include big names like Baystate Health, Spectrum Health, Medstar Health, and the Marshfield Clinic.

Taking Control of Ratings

"One of the reasons we undertook this was because ... these sites like Healthgrades and Vitals are rating doctors in a way that's not credible," said , senior vice president of Northwell Health of New York, which launched its star ratings for 1,000 doctors last August. "Anybody can write something, whether they've seen the doctor or not."

"I always felt a disadvantage because my mother is not technologically savvy," he joked. "She couldn't go on Yelp and give me a great review, one of the things that distinguishes us our process from these third parties."

In San Diego, Scripps Coastal Medical Group president Hirsch echoed that sentiment. He said Scripps Clinic Medical Group and Scripps Coastal decided to go public with their own star ratings -- becoming the second in the state, after Stanford Health -- in large part so their doctors will show up on Google searches, "on the top of the fold," above those from Vitals or Yelp.

Ostensibly, these programs are a courageous move, one that could expose mediocre providers in a way that embarrasses them, hurts their bottom line, and sends savvy patients looking elsewhere for a provider who is respectful and takes his time rather than one who is rude or rushed.

The scores now going public are based on a subset of responses to the vendor surveys most health systems have long used internally, such as CAHPS (Consumer Assessment of Healthcare Providers and Systems), now converted through various algorithms into a star rating. How those algorithms actually work is not always clear on the organizations' websites.

Marketing or Measurement?

Patients surveyed days after their physician's office visit indicate whether the doctor's friendliness and courtesy were very good, good, fair, poor, or very poor, and answer yes or no on whether they would recommend the provider. Patients indicate if their doctors were understandable, if they spent enough time listening, and how confident patients were in their care. The number of questions used to calculate stars varies from seven to 10, depending on the organization.
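
How those answers become a star score isn't spelled out on the organizations' sites, so the following is only a minimal sketch of one plausible approach, assuming each response maps to 1-5 points and the points are simply averaged across questions and surveys; the function name and sample answers are invented for illustration.

```python
# Minimal sketch (an assumption, not any vendor's published algorithm):
# map each Likert response to points, average across all answered items
# and all returned surveys, and round to one decimal for a 0-5 "star" score.

LIKERT_POINTS = {
    "very poor": 1,
    "poor": 2,
    "fair": 3,
    "good": 4,
    "very good": 5,
}

def star_rating(surveys):
    """`surveys` is a list of surveys; each survey is a list of Likert answers."""
    points = [LIKERT_POINTS[answer] for survey in surveys for answer in survey]
    if not points:
        return None  # no completed surveys yet
    return round(sum(points) / len(points), 1)

# Two hypothetical patients answering three of the seven to 10 items:
print(star_rating([
    ["very good", "very good", "good"],
    ["very good", "good", "fair"],
]))  # -> 4.3
```

Real programs may weight certain questions, fold in the yes/no recommendation item, or withhold a score until a doctor has a minimum number of surveys, any of which would shift the posted number.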

But for nearly all star-rated doctors nationally, variation is barely distinguishable, raising questions about whether the time-consuming effort provides useful information, or is just a slick marketing ploy that makes all doctors look pretty darn good.

Of the 406 physicians in the two Scripps medical groups in San Diego that launched star ratings, 92.1% received 4.7 to 5 stars, and those scoring lower don't look all that bad.

Pediatrician Herz got the lowest score, 4.1 stars, which to a consumer might look like a B+. A dermatologist got 4.2, a gastroenterologist got 4.4, and another pediatrician got 4.5. Another 22 doctors got 4.6.

Through survey vendors like Press Ganey or National Research, health systems get feedback from thousands of actual patients in a year, so Scripps doctors get many more reviews than they do on Yelp, providing that coveted top billing on a Google search. Even with Herz's 4.1 stars, he shows up more favorably, with 43 reviews, at the top of the page -- better than Healthgrades' 3.4 stars with 10 reviews, Yelp's three-star rating with two reviews, and Vitals' 3.5 stars from 17 reviews.

These public scores don't measure a doctor's diagnostic skill or whether he or she prescribed effective treatment, which should matter more to some patients. Additionally, the posted ratings usually exclude issues like staff courtesy, ease of parking, time spent waiting after a scheduled appointment, or annoying phone time spent on hold.

For most of these now-public ratings, only the patient's direct interaction with the doctor -- not the nurse or receptionist -- is what gets publicly scored, based on the argument that other components of the visit aren't under the doctor's control. Some systems, like Intermountain Healthcare, are starting to score doctors on .

Like Lake Wobegon

Burke, chief patient experience officer for Geisinger Health, which went live with star ratings and comments for 1,300 clinicians in October, acknowledged these programs are "like Lake Wobegon, where everybody is above average."

Burke insists, however, that even though all doctors get at least four stars, the decimal points can reveal underperformers. "If I'm looking at a physician with a 4.2 star rating, I know that compared to others, that's probably at the bottom 10%," he said. "They're performing not very well compared with their peers. You just look at the score differently: 4.0-4.3 stars is a D, and 4.4-4.5 is a C."
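
Burke's letter-grade translation is really a percentile reading: judge the decimal against the peer distribution rather than against a five-star scale. Here is a hedged sketch of that idea, using made-up peer scores since the real distribution isn't published here.

```python
# Illustration only: the peer scores below are invented to mimic the tight
# clustering near 5 stars that the article describes.

def percentile_rank(score, peer_scores):
    """Percent of peers scoring at or below `score`."""
    at_or_below = sum(1 for s in peer_scores if s <= score)
    return 100.0 * at_or_below / len(peer_scores)

peers = [4.2, 4.4, 4.5, 4.6, 4.6, 4.7, 4.7, 4.8, 4.8, 4.9]
print(percentile_rank(4.2, peers))  # -> 10.0: bottom decile despite "4+ stars"
```

On this reading, a consumer seeing 4.2 as a B+ and a system seeing it as a D are looking at the same number against different baselines.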

Burke said the real value of Geisinger's program isn't to shame doctors, but that doctors absorb criticism during the run-up period, when they get to see how they fare compared with their peers. "As they were aware this was coming, that in and of itself made them more aware of their communication styles." Patients should feel they get their questions answered, he said, and some doctors "got feedback that wasn't the case."

Now, the program functions like a "permanent Hawthorne effect," in which knowledge that one is being watched improves performance, he said.

"Horrible Patient Care"

To better differentiate patients' experiences with different doctors, most health systems are also posting the open-ended comments patients write on their surveys.

"Those probably prompt more change than the positive (star rating) reinforcement that most physicians will see in this," Burke said.

While most sing their doctors' praises, some comments are brutal and scathing, warning other patients away.

One physician received feedback that he "was not animated, like a stiff piece of cardboard." Initially, that clinician took it a bit hard, Burke said. Another patient wrote, "(I) don't have a high degree of confidence in her expertise." Now, most physicians have improved their scores dramatically, he said.

"Obviously, the reason these hospitals are doing this is because of marketing; that goes without saying," said , director of the Harvard Global Health Institute who has studied and favorably about this trend. But star ratings alone, which show little variation, are not enough.

"You have to have that narrative, those comments," he said. "Health systems that do these star ratings but then shut off the comments, that's where I say no, no, no. You're not actually being transparent. Now it's become a pure marketing tool. What you're saying is you're not actually interested in sharing with the community what's going on."

Leah Binder, president and CEO of the Leapfrog Group, which rates hospitals based on quality and safety, thinks the trend "is a good sign of progress, albeit imperfect. It tells us that providers are putting a priority on patient perspectives. Ultimately patients will learn how to differentiate among the reviews that give an 'easy A' and those that tell it like it is."

The trend of health systems taking ratings into their own hands began with University of Utah Health Care (UUHC) in 2012.

One patient said the doctor "..." Another said he waited "80 minutes ... and he spent three minutes discussing my case ... Dr. Meikle should retire ... it was a joke."

At Duke Health in Durham, which went public with its ratings in March, patients said one provider had "very poor bedside manner" and "was rude, condescending, and did not want to take the time to listen to my health concerns or offer any advice about them."

Andy Ibbotson, vice president and general manager of the National Research Corporation, the second-largest healthcare survey vendor in the nation, said transparency programs like these are among the fastest-moving trends in healthcare. Last year, "there were probably a half dozen healthcare systems implementing this type of star rating program but today there are more than 50," he said. Nearly all of them publish patients' comments, which he said is really the most valuable part of these programs.

"That's what gives you rich detail about the patient encounter and a feel to what it's like to be a patient...We've read some pretty scathing comments," he said.

SWAT Teams for Doctors

Lee, the UUHC CEO, touts the benefits of Utah's project and urges adoption across the country, especially of posting patients' actual comments.

At a June healthcare data conference in Washington, she explained how the program has helped doctors improve, some through "SWAT teams" that work not just with doctors but with nurses and receptionists too. "We had physician communication training and we even expanded our free valet parking for all our patients," Lee said.

In a phone interview during which two Scripps public relations officials participated, Hirsch also said the scores have proven useful for raising physicians' awareness of how they're seen by their patients, especially for those who -- prompted by lower scores -- are forced to admit they have "a blind spot to how they were interacting with patients in the exam room."

"I will tell you the truth: you can't change the stripes on some tigers," Hirsch acknowledged. "For some, scores go up and down in a continuous process." Some have agreed to let higher scoring doctors shadow them to suggest ways to improve, and that is a strategy that works.

One doctor with a "fairly heavy accent ... tries to talk in colloquialisms that don't fit ... and that's sometimes offensive," he said, "making a mess of her interactions with patients."

She opted for a coaching program with shadowing, "and got ideas of how to spin her personality" while minimizing colloquialisms, "and her scores went up dramatically, a real success story."

Need a Remote Starter for the Car

Among many rank and file physicians now rated with stars, reaction has been lukewarm. Before Northwell's program launched, Nash said "I used to joke that I was going to need a remote starter for my car."

In an article in the Annals of Internal Medicine last November, Nash listed four reasons "Why Physicians Hate 'Patient Satisfaction' (surveys) but Shouldn't."

They say "the questions focus on superficial things like wait time," that the surveys will impede doctors' duty to give bad news or challenge inappropriate requests, that they consume resources better spent on real medical care, and that patients "are simply not qualified to judge physicians." After all, doctors are treating patients, not running hotels.

"The unacknowledged truth is that providing a better experience for patients and families -- by being more attentive to their emotional and physical needs; treating them with dignity, respect, and empathy; and engaging them as trusted partners in their own care -- is real medicine, not hotel management," he wrote.

Over time, as the program rolled out, doctors' fears and concerns mellowed, he said, largely because of all the advance work the health system did.

"We started providing feedback well in advance of the public posting because we wanted doctors to see what their scores were going to look like so they'd have opportunity to make changes," he said.

It helped Nash improve his own interactions as well, since "doctors tend to listen to the first three words out of a patient's mouth and jump to a conclusion or a question. So I try to be more mindful of giving people more space to talk, to curb that impulse." Additionally, Nash said he now never stands in the presence of a patient: "I'm always sitting to give them a sense I'm not in a rush to get out of the room."

Of course, for some physicians, showing up with a better number of stars seems like trivial nonsense. Dozens of doctors inewsource tried to contact refused to respond, referring calls to Scripps administrators.

"I care so little about this I don't want to be found," said another, , a Scripps Clinic interventional cardiologist, who has not yet received his star rating.

"It's not discriminating much between providers ... and I don't think it's very valuable."

One Scripps physician who asked that he not be identified used a four-letter word to describe what he thought of it.

Another who scored exceptionally well, but did not want to be quoted by name because it would appear he was not a team player, said the star ratings are "a game they want us to play. The patients often use shallow measures to judge a doctor's medical skill."

One patient scored her doctor poorly because she wore a cocktail dress, while another was angry the doctor said she drank too much, he said.

Another doctor, who received 4.7 stars, said tersely, "I'm not interested; I'm retiring in 2 months." Several dozen other doctors did not respond to calls for their opinion on the rating system, and some receptionists said they were not aware their physicians were being scored.

Care about it or not, national experts say such public scores are not going away.

Even if the driver is marketing, that's not a bad thing, noted Jha, "if it leads physicians to look at why they're performing badly and make changes, to communicate better and be more respectful. Maybe they get a coach or have shadowing. We want doctors to clean up their act, and if this is what's motivating them, great."