

Published 3/1/2008
Charles Turkelson, PhD

Learning to read–all over again

Identifying bias requires a different “read”

With the increased emphasis on developing clinical practice guidelines and Technology Overviews, many AAOS fellows are spending more time reading medical literature. Unfortunately, much of that literature is of less than optimal quality and seemingly contradictory. As a result, it’s all too tempting to believe the information that supports what you already think is true and discount the information that disagrees with your opinion. In fact, a considerable body of psychological research shows that people tend to do just that.

But clinical practice guidelines and technology overviews must be unbiased, transparent, and reproducible. To get there, the AAOS relies on developing systematic reviews, which require the reader to take a different approach to medical literature.

Focusing on methods, results
One of the first steps in systematically reviewing literature is to read articles differently. Instead of emphasizing the “Conclusions” section of an article, a critical reader puts more emphasis on the “Methods” and “Results” sections. In doing so, the reader seeks to determine whether the authors’ conclusions are supported by their own data and analyses, and whether the design and conduct of the study are sound enough to warrant drawing any conclusions from the article at all.

Several studies suggest that this is an all-too-necessary step in reading the medical literature. For example, Ezzet found that commercially funded studies were significantly more likely to report “good” results after total hip (Fig. 1) or total knee (Fig. 2) arthroplasty than independently funded studies. Other studies have supported this finding. Although these results do not prove that commercial funding causes a bias in reporting, they are suggestive. Nonorthopaedic research studies have also indicated possible links between the source of the funding and the study’s results.

Perhaps even more disconcerting is that flawed statistical analyses may lead authors to come to incorrect conclusions. Vrbos and associates “observed misleading representations of clinically significant results and questionable experimental conclusions” in almost half of the publications they examined. Although this report is admittedly dated, no evidence exists to suggest that the statistical methods employed by authors have subsequently improved. As a result, readers should have a healthy skepticism of authors’ conclusions—not a new idea, but fundamental to all science. Research, be it basic or clinical, should be a process of skeptical inquiry.

Asking the right questions
Because systematic reviews employ scientific methods, conducting them has been called secondary research. Not surprisingly then, a systematic review, like any good research study, begins by posing specific questions. Doing so not only focuses the scope of the review, but also acts to combat bias.

A review’s questions typically specify the patients, interventions, comparisons, and outcomes of interest. Specifying these parameters in advance prevents the introduction of modified or new questions at the end of the process if the original questions did not yield the “desired” answer. Because these questions detail so specifically what and who is of interest, they give readers an intellectual audit trail and assure them that evaluating the literature was a planned—not a haphazard—effort.
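In the evidence-based-medicine literature, these four parameters are often abbreviated PICO. As a hypothetical sketch (the condition, intervention, and outcomes below are invented, not drawn from any AAOS review), such a question can be written as a fixed record:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ReviewQuestion:
    patients: str        # who is of interest
    intervention: str    # what is being evaluated
    comparison: str      # what it is compared against
    outcomes: tuple      # which results matter

# Hypothetical example question, fixed before any literature is searched;
# the clinical details here are invented for illustration.
question = ReviewQuestion(
    patients="adults with end-stage knee osteoarthritis",
    intervention="total knee arthroplasty",
    comparison="nonsurgical management",
    outcomes=("pain", "function", "revision rate"),
)
print(question.intervention)
```

Making the record frozen mirrors the point in the text: once posed, the question cannot be quietly modified partway through the review.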

Determining what’s in and what’s out
The next step is to construct criteria for including and excluding articles from consideration. These criteria are a systematic review’s “rules of evidence,” constructed to ensure that articles are not picked just because they support a particular point of view or because they were authored by a specific individual.

Setting the inclusion and exclusion criteria before searching for any literature helps reach this goal, and publishing these criteria as part of a systematic review allows readers to verify that the rules were actually followed. To help ensure transparency and combat bias, the AAOS discourages fellows who are authoring guidelines or working on technology overviews from exchanging articles at the beginning of a project; such exchanges usually reveal the authors’ opinions and could bias the process.

These processes for deciding which papers to include contrast with how articles are chosen for a traditional review. In a traditional review, authors tend to selectively include articles that support their own views. Authors of traditional reviews also tend to cite their own papers. Although self-citation may be necessary, repeated self-citation could serve to perpetuate an author’s opinions or falsely validate the conclusions of that author. A systematic review’s “rules of evidence” are designed to prevent these biases. Typically, these rules exclude studies of the weakest design, very small studies, animal studies, studies that may be too old to reflect current practice, and meeting abstracts.
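The typical exclusions listed above can be thought of as a screening predicate applied uniformly to every candidate article. A minimal sketch, with invented field names and thresholds (the cutoff year and minimum sample size are illustrative, not AAOS rules):

```python
from dataclasses import dataclass

@dataclass
class Article:
    year: int
    n_patients: int
    design: str            # e.g., "randomized trial", "case series"
    is_animal_study: bool
    is_abstract_only: bool

# Hypothetical inclusion rules mirroring the exclusions described above;
# the year and sample-size thresholds are invented for illustration.
def meets_criteria(a: Article) -> bool:
    return (
        a.year >= 1998                 # not too old to reflect current practice
        and a.n_patients >= 10         # not a very small study
        and a.design != "case series"  # exclude the weakest designs
        and not a.is_animal_study
        and not a.is_abstract_only
    )

candidates = [
    Article(2005, 120, "randomized trial", False, False),
    Article(1991, 45, "randomized trial", False, False),  # too old
    Article(2006, 8, "cohort study", False, False),       # too small
]
included = [a for a in candidates if meets_criteria(a)]
print(len(included))  # only the first article passes
```

Because the same predicate is applied to every article, a paper cannot be admitted merely because it supports a favored view or carries a familiar byline.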

Conducting the search
Next comes a comprehensive search for articles that meet the inclusion criteria. These comprehensive searches combat bias by preventing the reviewer from considering only articles that support a certain conclusion. Searching PubMed is not sufficient to attain this goal. Wilkins and associates found that PubMed contains fewer than half of the clinical trials published on some orthopaedic topics. Consequently, reviewers also search EMBASE and other databases. Only those articles that meet the inclusion criteria are actually retrieved.

Good systematic reviews contain a great deal of documentation about the disposition of each full article that is retrieved: the number of articles identified, retrieved, included, and excluded, along with the reasons for exclusion. This documentation illustrates the efforts reviewers make to be transparent, provides readers with an intellectual audit trail, and assures them that bias is being combated.
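The disposition documentation described above is essentially a running tally. A hypothetical sketch (every count and exclusion reason below is invented, not taken from any actual review):

```python
# Hypothetical disposition tally for a systematic review; the counts and
# exclusion reasons below are invented for illustration.
from collections import Counter

identified = 412   # articles found by the database searches
retrieved = 57     # full texts obtained for detailed screening

# Reason recorded for each excluded full-text article
exclusion_reasons = Counter({
    "wrong study design": 14,
    "fewer than 10 patients": 9,
    "animal study": 5,
    "meeting abstract only": 4,
})

excluded = sum(exclusion_reasons.values())
included = retrieved - excluded

print(f"Identified: {identified}")
print(f"Retrieved:  {retrieved}")
print(f"Excluded:   {excluded}")
for reason, n in exclusion_reasons.items():
    print(f"  - {reason}: {n}")
print(f"Included:   {included}")
```

Publishing such a tally lets a reader verify that every retrieved article was accounted for, rather than quietly dropped.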

Assessing quality
The quality of each study is then evaluated. The AAOS does this by using a Levels of Evidence system, which can be viewed online at

Readers can have more confidence in the results of high-quality studies than in the results of low-quality studies. Evaluating the quality of the literature, particularly by rules constructed before any articles were retrieved, also counteracts bias; it prevents reviewers from being more critical of studies that disagree with their preconceived ideas than of studies that agree with them.

Several approaches can be used to analyze the data; the choice depends on the quality of the data. Meta-analysis is typically used when relatively high-quality data (usually from randomized controlled trials) are available. In other instances, a narrative review is used. A well-done systematic narrative review is not a study-by-study description of the evidence but a synthesis of evidence across studies.

If this sounds like a lot of work, it is. In fact, it is more work than a single physician can reasonably be asked to do. The AAOS, in undertaking this work for its clinical practice guidelines and Technology Overviews, is easing this burden for members. To see some of the products of these efforts, visit the Research page of the AAOS Web site at www.aaos.org/research

References for the studies cited in this article can be found in the online version at www.aaosnow.org

Charles Turkelson, PhD, is the AAOS director of research. He can be reached at turkelson@aaos.org