Comparative effectiveness is not a new idea; it is simply comparing two (or more) treatments to determine which is most effective. In a controlled clinical study, this means comparing a treatment of interest to another active treatment rather than to a placebo.
Comparative effectiveness can be determined by asking several kinds of questions. Some questions compare similar treatments (competing drugs), while others focus on different treatment approaches (surgery versus drug therapy). Comparative effectiveness research might also ask which types of patients may benefit most from a particular therapy.
Although some questions may address costs and attempt to frame answers in terms of cost minimization, a better approach is to balance the dollar costs of a treatment against its benefits. These cost-utility analyses are preferred by methodologists. In comparative effectiveness research, a cost-utility analysis compares, for two or more treatments, the ratio of each treatment's dollar costs to some quantified measure of its benefit (such as quality-adjusted life years).
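As a brief illustration of the arithmetic involved (all figures below are invented for the example, not drawn from any actual study), a cost-utility comparison reduces each treatment to a single cost-per-QALY ratio and favors the lower one:

```python
def cost_per_qaly(cost_dollars, qalys_gained):
    """Ratio of a treatment's dollar cost to the quality-adjusted
    life years (QALYs) it yields -- lower is more cost-effective."""
    return cost_dollars / qalys_gained

# Two hypothetical treatments for the same condition (invented numbers)
ratio_a = cost_per_qaly(20_000, 2.0)   # $10,000 per QALY
ratio_b = cost_per_qaly(45_000, 3.0)   # $15,000 per QALY

# The treatment with the lower ratio delivers benefit more efficiently
better = "A" if ratio_a < ratio_b else "B"
```

Real analyses are far more involved (discounting, uncertainty, incremental ratios between treatments), but the core comparison is this ratio.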
Comparative effectiveness can be determined using one or more research designs, such as randomized controlled trials (RCTs), studies of claims records, or medical registries. Different comparative research methods are summarized in Table 1.
Comparative effectiveness reviews
Comparative effectiveness determinations can also be made using systematic reviews. Such reviews are called comparative effectiveness reviews and address clearly formulated questions, using explicit, systematic methods to identify, select, and critically appraise relevant research, and to collect and analyze data from the studies included in the review. Statistical methods (meta-analysis) may or may not be used.
The goals of a systematic review are to combat bias and to assure readers (through extensive documentation) that the review’s conclusions are indeed unbiased. Ideally, any reasonable person applying the same processes and methods used in a given systematic review should arrive at the same conclusions as the review.
In recent years, several legislative efforts aimed at comparative effectiveness research have been introduced. Perhaps the best known is HR 1, most commonly referred to as the stimulus bill.
Signed into law on Feb. 17, 2009, this bill provides a total of $1.1 billion for the Agency for Healthcare Research and Quality (AHRQ), the National Institutes of Health, and the Department of Health and Human Services to fund clinical comparative effectiveness research. It also mandates the creation of the Institute of Medicine’s Committee on Comparative Effectiveness Research Priorities and the Federal Coordinating Council on Comparative Effectiveness Research (FCCCER).
Other measures under consideration would require the insurance industry to share in the funding of comparative effectiveness research and expand the role of FCCCER. Most of these bills include language on comparative effectiveness research on devices and procedures, making comparative effectiveness research on orthopaedic topics likely. The relatively high cost and frequency of some orthopaedic procedures may also prompt comparative effectiveness research and reviews.
AHRQ will probably play a major role in future comparative effectiveness efforts. It is leading efforts to develop and improve methods for comparative effectiveness research. This is significant because most proposed legislation would consider observational (nonrandomized) studies valid for evaluating comparative effectiveness. It does not imply that case series will be used in comparative effectiveness research, because the results of such studies are very difficult to interpret.
Impact on orthopaedics and the AAOS
The pressure on orthopaedics to demonstrate the effectiveness of its treatments is not likely to decrease. Orthopaedics, along with the rest of medicine, will feel increased pressure, particularly with regard to high-cost, high-volume procedures.
Comparative effectiveness research may also place greater emphasis on higher quality orthopaedic research. Comparative effectiveness RCTs are more realistic than RCTs that compare a surgical treatment to a sham control group; for example, it may be more ethical to compare one surgical technique to another than to compare a surgical technique to a sham operation. Orthopaedic surgeons should be aware that many of the arguments advanced against RCTs (such as the inability to blind patients) would be weakened in an environment of comparative effectiveness. The stature of uncontrolled studies, which are very difficult to interpret, is unlikely to be enhanced by comparative research.
An increased emphasis on comparative effectiveness research parallels the Academy’s current activities in this area. For example, AAOS staff routinely conduct systematic reviews when developing clinical practice guidelines. Similarly, the first three AAOS technology overviews are really “comparative effectiveness overviews.”
Comparative effectiveness research certainly provides a better basis for decision-making than the historic approach, which simply determines whether a given treatment is better than no treatment at all (placebo). Comparative effectiveness research focuses on the relevant clinical question, “What is the best care that can be given to patients?” In the end, that is the most important question and the one that best serves orthopaedic surgeons and their patients.
Charles M. Turkelson, PhD, is director of the AAOS department of research and scientific affairs. This article is based on a white paper he prepared for the AAOS Board of Directors.