The explosive growth in the number of costly orthopaedic procedures, including joint replacements, back surgeries, and other treatments, has attracted the attention of legislators, payers, employers, and experts in both health care and economics. Citing a lack of scientific evidence supporting the effectiveness of these procedures, many of these stakeholders are beginning to ask questions. They want to know the following:
- Are these costly procedures effective?
- Are other, less costly treatment options just as effective?
- Even when effective, are these procedures right for every patient?
To help identify the best approaches for answering these questions, medical/surgical specialists, researchers, and government representatives convened in Washington, D.C., in May for a Comparative Effectiveness Research (CER) Symposium cosponsored by the AAOS and the Orthopaedic Research Society. Participants heard from a number of noteworthy speakers on the professional and economic obligation of physicians to participate in the comparative effectiveness conversation.
“Orthopaedic surgeons have opportunities to improve quality, but we need to provide evidence that will change practice behavior,” said Kristy L. Weber, MD, chair of the AAOS Council on Research and Quality, who cochaired the symposium. “I’m proud that the AAOS is contributing to these efforts by focusing on CER so that our members can be sure they are providing patients with optimal care.”
What is unique about CER?
According to the “father” of CER, Harold Sox, MD, MACP, a main driver of healthcare costs is uncertainty about the effectiveness of treatments. This uncertainty is a product of research studies that examine only the benefit or harm of a single treatment or device, such as one type of cervical disk implant, in a controlled population.
Dr. Sox, professor of medicine at The Dartmouth Institute at Dartmouth Medical School, says that the results of these studies often do not translate readily into physicians’ practices because the findings apply only to a small number of patients and because the treatment being examined has not been compared with alternatives that may prove more beneficial.
CER offers a new, patient-centered paradigm that tailors subject matter and results to the needs of physicians and their patients to help them make optimal treatment decisions. CER is a direct, head-to-head comparison of a variety of tests, treatments, and prevention strategies on targeted populations.
CER seeks evidence that is responsive to individual patient preferences and needs as a means to measure the value of a treatment. The evidence gathered through CER enables physicians to look at the “big picture”: not just whether a disk replacement device is safe and effective, but whether an individual patient should undergo total disk replacement or spinal fusion, based on his or her characteristics and desired outcomes.
Symposium presenters discussed additional approaches to conducting CER to supplement the high-quality data gathered from randomized controlled trials (RCTs) (Fig. 1).
The clinical prediction model
Most RCTs are designed to answer a specific question that applies to the “average patient.” According to Adam Pearson, MD, MS, of Jefferson University Hospital, “This poses a problem because everyone knows that, in the clinical setting, every patient has a unique set of characteristics that may modify the effect of the treatment.” Lack of patient diversity in trials may cause physicians to make efficacy judgments based on generalized data.
Clinical prediction models have been developed to go beyond trial results to help clinicians and patients make optimal treatment decisions by evaluating a multitude of variables as potential predictors of a treatment’s effect. According to Dr. Pearson, enabling decision-makers to see treatment outcomes across a variety of patient types can prevent unnecessary surgical procedures or prolonged, ineffective nonsurgical treatment.
For example, two characteristically different patients might both qualify for spinal stenosis surgery in an RCT. Use of a clinical prediction model, however, might show that surgery would yield a better outcome in only one of the two patients.
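To make the idea concrete, the following is a minimal sketch of how such a model might be built, using invented data: a logistic regression on simulated patient characteristics (age, symptom duration, smoking status) with a treatment-by-smoking interaction, used to estimate each patient’s predicted benefit from surgery. The variables, coefficients, and outcomes are hypothetical and are not drawn from the models presented at the symposium.

```python
# Minimal sketch of a clinical prediction model on hypothetical data;
# not the spinal stenosis model presented at the symposium.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Simulated baseline characteristics for spinal stenosis patients.
age = rng.normal(65, 8, n)
symptom_months = rng.normal(12, 4, n)
smoker = rng.integers(0, 2, n)
treated_surgically = rng.integers(0, 2, n)

# Simulated outcome: 1 = clinically meaningful improvement at follow-up.
logit = (0.8 * treated_surgically - 0.03 * (age - 65)
         - 0.5 * smoker - 0.02 * symptom_months)
improved = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Covariates plus a treatment-by-smoking interaction term.
X = np.column_stack([treated_surgically, age, symptom_months, smoker,
                     treated_surgically * smoker])
model = LogisticRegression(max_iter=1000).fit(X, improved)

def predicted_benefit(age, symptom_months, smoker):
    """Predicted probability of improvement with surgery minus without."""
    with_surgery = model.predict_proba(
        [[1, age, symptom_months, smoker, smoker]])[0, 1]
    without_surgery = model.predict_proba(
        [[0, age, symptom_months, smoker, 0]])[0, 1]
    return with_surgery - without_surgery

# Two patients who might both qualify for surgery in an RCT can have
# very different predicted treatment effects.
print(predicted_benefit(age=58, symptom_months=6, smoker=0))
print(predicted_benefit(age=74, symptom_months=24, smoker=1))
```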
Adaptive designs
Many RCTs conducted to predict and evaluate the safety and efficacy of a device or product lack flexibility in their designs. For instance, even if a treatment is found to be toxic partway through the trial, the protocol cannot be altered and may cause unnecessary harm to its subjects. Adaptive design methods give investigators the flexibility to stop a trial early due to safety or efficacy concerns and to intervene to re-estimate the sample size, modify the inclusion criteria, or alter the dosage without compromising the integrity of the study.
Adaptive design methods are attractive because they enable researchers to begin with more variables and then exclude those that prove excessively harmful or ineffective. According to Steven Goodman, MD, MHS, PhD, of Johns Hopkins University, adaptive design allows for smaller, smarter trials by making it possible to alter a study as more information accrues.
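As a rough illustration of the approach, the sketch below simulates a two-arm trial that accrues patients in stages and performs an interim analysis after each stage, stopping when a placeholder efficacy or futility boundary is crossed. The response rates, stage sizes, and thresholds are invented; a real adaptive design would use pre-specified statistical boundaries (for example, group-sequential methods) to preserve the validity of the trial.

```python
# Illustrative sketch of a two-arm trial with interim looks that can stop
# for efficacy or futility. All rates and boundaries are placeholders.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
p_control, p_treatment = 0.30, 0.45   # hypothetical true response rates
stage_size, max_stages = 50, 4        # patients per arm accrued at each stage
efficacy_z, futility_z = 2.8, 0.5     # placeholder stopping boundaries

control_responses = treatment_responses = n_per_arm = 0
for stage in range(1, max_stages + 1):
    # Accrue another stage of patients in each arm.
    control_responses += rng.binomial(stage_size, p_control)
    treatment_responses += rng.binomial(stage_size, p_treatment)
    n_per_arm += stage_size

    # Interim analysis: pooled z-test for the difference in response rates.
    p1 = control_responses / n_per_arm
    p2 = treatment_responses / n_per_arm
    pooled = (control_responses + treatment_responses) / (2 * n_per_arm)
    z = (p2 - p1) / np.sqrt(2 * pooled * (1 - pooled) / n_per_arm)
    p_value = norm.sf(z)  # one-sided

    if z >= efficacy_z:
        print(f"Stage {stage}: stop for efficacy (z={z:.2f}, p={p_value:.4f})")
        break
    if z <= futility_z:
        print(f"Stage {stage}: stop for futility (z={z:.2f}, p={p_value:.4f})")
        break
    print(f"Stage {stage}: no boundary crossed, continue (z={z:.2f})")
```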
Although adaptive designs may be useful in keeping up with products entering the market, some are concerned that their use may lead to a completely different trial that does not address the initial study questions.
Observational studies
“Potential limitations of RCTs can preclude their ability to provide meaningful results on safety and effectiveness in a ‘real-world’ setting,” said Michael Steinbuch, PhD, executive director of epidemiology at Johnson & Johnson. In these cases, noninterventional or observational studies are used.
Advantages of observational studies include their lower cost, their ability to acquire evidence faster than an RCT, potentially longer follow-up periods, and sample sizes large enough to investigate rare events.
Dr. Steinbuch demonstrated how observational studies are using registry data to identify risk factors for revision joint replacements. For example, observational conclusions drawn from Kaiser Permanente National Total Joint Replacement Registry data found a 10 percent difference in revision rates between partial and total knee replacements. When physicians learned about the results, both the number of partial knee replacements and the use of uncemented total knee replacements dropped.
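As a simple illustration of how registry data can surface such differences, the sketch below tabulates crude revision rates by procedure type on an invented data set. The counts are chosen only to mirror the 10 percentage point gap described above and do not reflect the registry’s actual figures.

```python
# Hypothetical registry extract; counts are invented for illustration.
import pandas as pd

registry = pd.DataFrame({
    "procedure": ["partial_knee"] * 200 + ["total_knee"] * 800,
    "revised":   ([1] * 30 + [0] * 170     # 30 of 200 partial knees revised
                  + [1] * 40 + [0] * 760), # 40 of 800 total knees revised
})

# Crude revision rate by procedure type.
revision_rates = registry.groupby("procedure")["revised"].mean()
print(revision_rates)
# partial_knee    0.15
# total_knee      0.05

gap = revision_rates["partial_knee"] - revision_rates["total_knee"]
print(f"Difference in revision rates: {gap:.0%}")  # -> 10%
```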
Kurt Spindler, MD, PhD, medical director of Vanderbilt Sports Medicine, presented data from the Multicenter Orthopaedic Outcomes Network (MOON) anterior cruciate ligament (ACL) reconstruction study. The MOON studies involved 7 prospective and 24 retrospective studies on ACL reconstruction patients over 6 years to determine the risk factors for surgical failure.
Results showed that meniscus injury is consistently related to radiographic osteoarthritis and that revision ACL surgery, allograft, smoking, and higher BMI are risk factors for worse long-term outcomes at 6-year follow-up. Other findings from the MOON study are forthcoming and have generated enough data to prompt the launch of the Multicenter ACL Revision Study (MARS), which seeks to “identify clinically useful predictors of outcome [to] inform practice decisions and improve revision ACL reconstruction outcomes.”
“These examples show how communicating evidence through feedback mechanisms can lead to changes in physician practice,” concluded Dr. Steinbuch.
Madeleine Lovette is the communications specialist in the American Association of Orthopaedic Surgeons office of government relations. She can be reached at lovette@aaos.org.
Comparative effectiveness research
This is the first of two articles covering the Comparative Effectiveness Research Symposium cosponsored by the AAOS and the Orthopaedic Research Society, and held May 19–21, 2011, in Washington, D.C. The symposium was led by principal investigators Kristy L. Weber, MD, chair of the AAOS Council on Research and Quality; Mark Helfand, MD, MPH, MS, director of the Oregon Evidence-Based Practice Center; and James N. Weinstein, DO, MS, director of the Dartmouth Institute for Health Policy and Clinical Practice. The proceedings of the symposium will be published in ‘Standing Room Only’ audiovisual format.
Additional Links:
- 2011 AAOS/ORS Comparative Effectiveness Research Symposium
- AAOS position statement on comparative effectiveness research
- AAOS hosts comparative effectiveness research symposium