
AAOS Now

Published 10/1/2010 | Terry Stanton

What’s the measure of physician quality?

Study finds quality ranking tiers are inconsistent, unjustified

You’re a top-tier physician according to one system, but ranked solidly in the middle by another. What makes the difference—and who’s doing the deciding?

According to a study published in the Journal of Bone and Joint Surgery, there’s little agreement among insurance plans about which surgeons are top-tier. The analysis of three insurance plan rating systems in Massachusetts found that only 5.2 percent of surgeons were rated top-tier by all three health plans, and the only factor independently associated with the top tier in all plans was a suburban office location.

“No data indicate that higher-quality care is delivered in suburban areas or that lower-quality care is delivered in urban areas,” countered the study authors. “We hypothesize that the clustering of top-tier physicians into suburban areas represents the failing of tiering methodologies to adequately adjust for increased severity and complexity of patients who seek care at urban academic medical centers.”

To explore the highlights of the study’s findings, AAOS Now spoke to one of the study’s authors, Timothy Bhattacharyya, MD, who previously practiced in Massachusetts.

AAOS Now: What prompted the study?

Dr. Bhattacharyya: About 5 years ago, health plans said they were going to start ranking physicians. When I got the letter specifying my tier and ranking, I didn’t understand it, and I wanted to know more.

We know it’s difficult to measure things, especially in clinical medicine. I wanted to ask, “How reliable are these measurements?” It was a 3-year effort to understand how these three particular health plans ranked physicians and to try to assess their validity and reliability. They were basically using the same databases and the same computer programs to classify all orthopaedic surgeons. We thought they all should agree who was top tier and who was bottom tier. But they don’t agree at all.

So then we asked what kind of characteristics can we discern about the physicians who were ranked in the top tier? We found that doctors identified as being high-quality didn’t practice in urban or rural areas but in suburban areas. They tended to be average-volume physicians, not high-volume. They tended to accept Medicaid and to be board-certified.

The general conclusions and take-home points are that insurance companies don’t agree on who is top, middle, or bottom tier, and that 70 percent to 80 percent of practicing orthopaedic surgeons are ranked in the bottom tier, even though they’re doing a good job every day.

Tiering systems are based on claims and billing data, which we all know offer only a fuzzy view into quality of care. That’s one reason tiering systems are unreliable.

AAOS Now: What kind of data would put a physician in the top tier?

Dr. Bhattacharyya: The insurance companies were not forthcoming with transparent algorithms on what constituted a top-tier physician. They offered very vague guidelines.

For example, one company used an algorithm that measured whether a patient with a distal radius fracture was sent for a bone scan within 6 months. That’s not a great quality metric, because if the patient is 95 years old, there is no good clinical reason to send that patient for a bone scan; his or her osteoporosis management isn’t going to change. It also doesn’t recognize that the patient may have had a bone scan within the past 6 months and so doesn’t need another one.

AAOS Now: What do these tiering systems mean for patients?

Dr. Bhattacharyya: I don’t think that tiering systems have a big effect on patients. Most patients seek referrals to an orthopaedic surgeon from their primary care physician or from friends and family. At the time of this study, patients who saw a physician in the lower tiers would have a nominally higher ($10) co-payment.

In theory, patients could go to their insurance company’s Web site to find the high-quality physicians in their area. A patient in health plan A may see that his or her doctor is not in the top tier and get upset. But that patient doesn’t have access to company B’s rankings, and, according to our data, there’s a 60 percent chance that that doctor is going to be ranked in the top tier by company B.

Both physicians and patients see these tiering data in isolation. Our study provides a bird’s eye view of what the tiering systems look like.

AAOS Now: Do these tiering systems have an impact on patient selection of an orthopaedic surgeon?

Dr. Bhattacharyya: I don’t think these tiering systems have an impact on choice of surgeon. Most patients seek a referral either from their primary care physician or friends and family. Currently, patients don’t choose their physicians by first going to their insurance company’s Web site.

AAOS Now: You gave the example of surgeons getting marked down for not ordering a test. Would surgeons be penalized for ordering a test?

Dr. Bhattacharyya: The back pain measure includes how many patients with back pain are sent for magnetic resonance imaging (MRI). But these measures are designed for primary care providers, not specialists. Orthopaedic surgeons have clear guidelines for when patients with low back pain need MRIs. But there’s no way to communicate to the insurance company that you are meeting these indications.

AAOS Now: How do you account for the quirks in the tiering, such as the correlation between volume and top-tier status?

Dr. Bhattacharyya: With joint replacement, it’s clear that volume and quality are related. But it turns out that volume and tier are very much unrelated. The trend was that physicians in the higher tier tended to have lower volumes, which is the opposite of what you would want.

AAOS Now: What accounts for a plan that has no one in the top tier?

Dr. Bhattacharyya: Some plans are very restrictive in assigning people to the top tier, and others are more liberal. We can’t really discern why that is. For the three-tier plan in Massachusetts, no one met both the cost and the quality thresholds for the top tier. But because physicians only found out about their personal status, everyone felt bad about being in the middle or bottom tier; they didn’t know the top tier was empty.

AAOS Now: What criteria might do a better job in classifying physicians?

Dr. Bhattacharyya: I think that physicians in each specialty need to take a bigger role in developing quality measures that make sense to doctors and to patients. Everybody in the system—physicians, patients, payors—is asking for this. Physician leadership is essential.

Measures such as unplanned return to surgery within 30 days, which is just a rate, are probably better than all-or-none measures. Measures of outcome are much better than measures of process.

Because current measures are unreliable, orthopaedic surgeons need to participate in developing these quality measures. We want to have a role in shaping the future. One of the reasons we published this study is, if payors want to tie compensation to tier, we now have the data to say, “Well, this is how it worked out in Massachusetts. Let’s come up with a better plan.”

Dr. Bhattacharyya’s coauthors for “Physician Tiering by Health Plans in Massachusetts” (J Bone Joint Surg Am, Sep 2010;92:2204–2209) are Ajay D. Wadgaonkar, BS, and Eric C. Schneider, MD, MSc.

Disclosure information: Dr. Bhattacharyya reported no conflicts.

Terry Stanton is the senior science writer for AAOS Now. He can be reached at tstanton@aaos.org

Medical societies respond to tiering programs
In July 2010, the American Medical Association (AMA) sent letters to the largest U.S. health insurance companies asking for “immediate action to improve the accuracy, reliability, and transparency of physician ratings.”

The letters were cosigned by 46 state medical societies and called on each health insurer to “publicly document the accuracy of their physician cost profiles by submitting the programs for external review by unbiased, qualified experts.”

In the letters, the AMA cited a study by the Rand Corporation that shows that physician ratings conducted by health insurers can be wrong up to two thirds of the time for some groups of physicians. Under the best circumstances, insurers misclassified one fourth of all physicians.

To the courts
In 2008, the Massachusetts Medical Society (MMS), along with five physicians, filed a lawsuit against the state Group Insurance Commission (GIC) contending that the GIC’s tiering program—known as the Clinical Performance Improvement (CPI) initiative—“defrauded the public” and “defamed physicians whose performance was improperly rated because of faulty data.”

The lawsuit, which seeks to halt the tiering program until it can be modified, remains pending in Massachusetts Superior Court.

In September 2010, the California Medical Association (CMA) entered the legal fray with a suit against Blue Shield of California alleging that the health insurer’s new online physician rating system is inaccurate and misleading to consumers. The recently launched program awards “blue ribbons” to physicians who, according to the company, have met national quality standards.

The CMA and physician plaintiffs said the system does not give physicians a way to correct errors in the rating system and that it has the effect of funneling patients to less expensive physicians. Blue Shield contends that cost data were not used in the rating process.

According to CMA President J. Brennan Cassidy, MD, the rating system is “both misleading the public and potentially damaging the reputations of thousands of doctors. The art and science of medicine is complicated, and any ratings system should reflect that complexity.”