Published 8/1/2008
Charles Turkelson, PhD; Kevin J. Bozic, MD, MBA

What is (and isn’t) an AAOS Technology Overview?

Orthopaedic surgery is a rapidly evolving specialty area with a near-constant influx of new devices, drugs, biologics, and procedures. As a result, orthopaedic surgeons are continually faced with difficult decisions regarding which technologies they should adopt and use in their practices.

Finding and synthesizing the information available to inform these decisions is a difficult challenge for the busy clinician. One option is to read the literature, but often there’s just too much for one person to read. Conversely, for new technologies, it’s difficult to find anything to read. To address these difficulties, the AAOS has created literature reviews called Technology Overviews.

Evidence-based processes
Technology Overviews, like the AAOS’ Clinical Practice Guidelines, are developed using the processes of evidence-based medicine to minimize bias, enhance transparency, and promote reproducibility. The reason for minimizing bias is obvious: it’s difficult to make the right decision based on incomplete or distorted information. Transparency supports this goal in several ways: by asking precise questions about the technology in question; by framing specific “rules of evidence” that state what kinds of studies will and will not be examined; by comprehensively searching for all available, relevant studies; and by documenting all of the relevant published findings.

To gain a better understanding of how this process works, let’s look at the AAOS’ first Technology Overview—on gender-specific knee replacements. The full Technology Overview can be found in the February 2008 issue of the Journal of the AAOS.

Gender-specific knee
The gender-specific knee replacement overview posed the following questions:

  • Do women have higher failure rates than men after traditional knee replacement surgery?
  • Do gender-specific knee replacements increase the rates of successful knee replacement surgery in women?

These questions focused on results (measures of surgical success) that are most important to patients. They were developed before any other work on the overview started. Asking the questions first and holding firm to them means that the overview cannot go in new directions if its authors do not like the answers to the initial questions. This helps combat bias.

The overview also contained nine specific rules for determining what kinds of articles were (and were not) considered. Among these were requirements that a study be a full report and that its subjects be human. To show that the rules were not chosen arbitrarily, the overview explains why some of them were adopted. As with the questions, these rules of evidence were developed before any literature search began and before data from any articles were examined. This ensured that articles were not selected to make a specific point.

The rules did not limit the articles that would be considered to reports of studies published by “experts” or “authorities” on a particular topic, simply because experts and authorities are not always right. Researchers without established reputations can also be right, so their evidence should be examined as well. For similar reasons, the rules didn’t allow consideration of traditional review articles, because traditional reviews can be written to support a particular point of view. When this happens, traditional reviews are more like op-ed pieces than scientific works.

The rules implicitly defined evidence for overviews as evidence derived from the published, peer-reviewed, medical literature. Literature developed by a device or drug manufacturer and meeting abstracts were not considered. Although meeting abstracts can be important, approximately half are never published as full papers and, therefore, never undergo the scrutiny of the full peer-review process. Furthermore, meeting abstracts are too short to allow critical evaluation of the methods used in any given study.

Searching for articles
After the questions were posed and the rules of evidence established, the search for relevant articles began. For the gender-specific knee overview, a comprehensive search of PubMed was conducted. Recently, the AAOS expanded its electronic searches to include Embase, another large electronic bibliographic database, which will also be used for future overviews.

This search was followed by a review of the bibliographies of all retrieved articles to ensure that the searches didn’t miss anything. The strategies used to search PubMed were published in the overview so that readers could replicate them and satisfy themselves that articles were located using objective methods.

Unlike the AAOS’ Clinical Practice Guidelines (which are also evidence-based documents), Technology Overviews do not make any recommendations about whether to use a device, drug, biologic, or procedure. Rather, overviews are educational tools designed to assist readers in coming to their own conclusions about the available evidence. To help readers judge that evidence, each study is assigned a Level of Evidence. Levels range from I to IV; confidence in a study’s results increases as its Level of Evidence approaches Level I.

To help physicians (and their patients) make decisions, all results of the studies relevant to an overview are included. This does not mean that the overview simply repeats the studies’ conclusions. Instead, the overview presents data such as differences between preoperative and postoperative pain and function scores, group averages, the differences between those averages, and statistical significance.

Audit it yourself
Providing these rules and these data in an AAOS Technology Overview invites readers to audit all aspects of the overview. Although many readers will not take advantage of this opportunity, making the information available enables those who are interested to conduct an audit. If you want to audit an overview (or any other evidence-based document), see “Tips for conducting a technology overview audit” below.

The AAOS’ Technology Overviews, like its Clinical Practice Guidelines, are reviewed by external governing bodies and organizations. Clinical Practice Guidelines are reviewed by representatives of many specialties, not only orthopaedic surgeons. The task of these reviewers is to ensure that the AAOS’ conclusions are evidence-based. The review process for Technology Overviews is quite different: reviews are solicited from all relevant manufacturers.

Because overviews don’t include conclusions or recommendations, the responsibility of the reviewers is to ensure that the overview has addressed all of the relevant literature and to determine whether the overview has inadvertently guided physicians to a particular conclusion.

Readers are also welcome to review the AAOS Technology Overviews. To provide your feedback on the gender-specific knee overview, visit www.aaos.org/research/overviews

You will also be able to provide feedback on future overviews. Your comments will let the AAOS know whether it is “on the right track” and how overviews can be modified to best help you and your patients make decisions regarding the appropriate use of orthopaedic procedures and technologies.

Charles Turkelson, PhD, is director of the AAOS research department; he can be reached at turkelson@aaos.org

Kevin J. Bozic, MD, MBA, is a member of the Guidelines and Technology Oversight Committee; he can be reached at kevin.bozic@ucsf.edu

Tips for conducting a technology overview audit
How do you audit a Technology Overview (or any other evidence-based document)?

Here are a few questions to ask:

Does the document contain a list of all included articles?
Usually, this list is composed of the tables that document the results of all included studies. If a document doesn’t include these tables, see whether it refers you to where they can be found (often they are posted on the Internet). If no such list or no such tables exist, you can’t verify that the document was objectively prepared.

Does the document address only articles that support a single point of view?
If conflicting evidence is noted in the evidence tables but not addressed anywhere else, it is possible that those conflicting articles, although retrieved, were not seriously considered.

Are evidence-based processes really being followed?
Documents not strictly prepared according to evidence-based processes often contain “supplemental analyses” or recently published articles located after the searches of electronic databases were conducted. The desire to include recent articles is understandable, but searches for such articles are rarely exhaustive. New articles might have been included only because they supported a particular point of view.

Does the document contain articles you haven’t heard of?
Although this is not a good criterion for new technologies (where only a few articles may exist and you may be familiar with all of them), an evidence-based analysis of a more established technology is likely to contain articles that are new to you. These articles probably come from journals you don’t regularly read and may find difficult to access. If the articles listed all come from “standard” sources, then the search for articles may not have been all that exhaustive.

Does the document distinguish between patient-oriented outcomes and intermediate (or surrogate) outcomes?
Patient-oriented outcomes—such as pain relief, improved function, and increased ability to perform daily activities—directly measure whether a treatment helps patients live healthier, happier, or longer lives. Intermediate and surrogate outcomes—such as biomarkers and imaging results—are often used as substitutes for patient-oriented outcomes. Unfortunately, use of intermediate/surrogate outcomes typically overestimates the effects of a treatment and does not capture the side effects (harms) of that treatment. A sound evidence-based document not only distinguishes between patient-oriented and intermediate/surrogate outcomes, it also gives priority to the former. Documents that give patient-oriented and surrogate outcomes equal weight may be biased.

Do the evidence tables contain only numbers, or do they contain text that repeats the conclusions of the authors of the original articles?
If the latter, the authors of the document are probably not being critical enough of the literature they are evaluating. This might not guarantee bias, but it does hint at a lack of sophistication.

What kinds of criticisms are used in evaluating individual articles?
Two basic aspects of an article can be criticized: (1) its design, conduct, and analysis; and (2) the applicability of its results. Criticisms of a study’s design, conduct, or analysis are very serious. They mean that the study’s results cannot be trusted under any conditions. For example, the conclusions of a study that lost a very large proportion of its patients during follow-up, or that used improper statistical methods, cannot be trusted to apply to any patients.

Criticisms of a study’s applicability are not necessarily as serious. Such criticisms suggest that the treatments used or the patients enrolled in a study may not be representative of actual clinical practice. The results of a study that enrolled unusual patients (but that had no other flaws), however, are still valid for those patients. Furthermore, low applicability does not prove that the study findings fail to apply to patients like those seen in actual practice; another study is required to determine this. Authors who criticize a study’s applicability without noting that future research is needed to establish the generalizability of its findings may be biased.

Have the authors sought out differing opinions?
You can check this by examining not only who prepared the document, but also who reviewed it.