Measuring Up: The Performance Measure Development Process
David R. Chandler, MD

Published 9/1/2016

Fig. 1 The stepwise approach to developing performance measures.

Performance measure development progresses through the following five phases, as depicted in Fig. 1:

  • conceptualization
  • specification
  • testing
  • implementation
  • use and continued maintenance

A link to the flow chart of the full process that AAOS implements for development can be found here.

In the first phase of measure development, conceptualization, an idea for improving healthcare quality is proposed. The measure should explicitly align with the goals and objectives of the Centers for Medicare & Medicaid Services' (CMS) Quality Strategy. It should also address a performance gap, that is, a known variation in performance based on information contained in evidence-based clinical practice guidelines or appropriate use criteria.

Measure development should consider the following evaluation criteria:

  • importance
  • scientific acceptability of measure properties
  • feasibility
  • usability and use
  • consideration of related and competing measures

The last criterion ensures that the measure complements, rather than duplicates, other measures and conforms with the specifications of existing measures.

Developing a business case for an evidence-based measure concept is critical. Stakeholder input is obtained by assembling a Technical Expert Panel (TEP), in which patient/caregiver and provider input are equally important, and by soliciting public comment. The TEP includes representatives from all areas of health care relevant to the measure. For example, the TEP for performance measures on hip fractures in the elderly included members of the AAOS, the American Association of Hip and Knee Surgeons, and the Orthopaedic Trauma Association, as well as physical therapists, nurses, emergency room physicians, anesthesiologists, and physical medicine and rehabilitation physicians.

Measure specification involves drafting the measure and conducting initial feasibility studies. The precise technical specification of performance measures includes the measure name or title; a description of the measure; the initial population covered by the measure; a denominator; a numerator; exclusions and exceptions; data sources; key terms, data elements, and codes; the unit of measurement or analysis; sampling; risk adjustment; time windows; measure results; and the calculation algorithm.

The denominator is specified by a statement that defines the target population and is formatted to include the age ranges, diagnosis, procedure, time window, and other qualifying events. The numerator is also specified by a defining statement and includes the number of denominator-eligible patients who satisfy the process, condition, event, or outcome that is the focus or intent of the measure.
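As an illustration of how the denominator, exclusions, and numerator combine to produce a measure result, the following Python sketch computes a performance rate for an invented hip-fracture process measure. All field names, qualifying criteria, and patient data here are hypothetical assumptions for illustration, not part of any actual AAOS or CMS specification.

```python
# Hypothetical sketch: turning a measure's denominator, exclusions, and
# numerator statements into a performance rate. Data and criteria invented.

def performance_rate(patients, in_denominator, is_excluded, meets_numerator):
    """Rate = numerator-satisfying cases / (denominator cases minus exclusions)."""
    eligible = [p for p in patients if in_denominator(p) and not is_excluded(p)]
    if not eligible:
        return None  # no denominator-eligible patients; rate undefined
    met = [p for p in eligible if meets_numerator(p)]
    return len(met) / len(eligible)

# Illustrative population for an invented "surgery within 48 hours" measure:
patients = [
    {"age": 72, "dx": "hip_fracture", "comfort_care": False, "surgery_within_48h": True},
    {"age": 80, "dx": "hip_fracture", "comfort_care": False, "surgery_within_48h": False},
    {"age": 68, "dx": "hip_fracture", "comfort_care": True,  "surgery_within_48h": False},
    {"age": 55, "dx": "knee_oa",      "comfort_care": False, "surgery_within_48h": False},
]

rate = performance_rate(
    patients,
    in_denominator=lambda p: p["dx"] == "hip_fracture" and p["age"] >= 65,
    is_excluded=lambda p: p["comfort_care"],        # e.g., comfort-care-only patients
    meets_numerator=lambda p: p["surgery_within_48h"],
)
print(rate)  # 0.5: of two eligible patients, one met the numerator
```

Note how the exclusion removes a patient from the denominator entirely, which is why precise exclusion and exception language in the technical specification directly changes the reported rate.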

Posthospitalization care measures are to be included. The goal is to have a mixture of process measures and patient-reported outcome-based performance measures (PRO-PMs).

The results of structure and process measures are usually entirely within the control of the provider being measured. Outcome measures, however, may be affected by factors beyond that provider's control and may require risk adjustment and/or risk stratification. Risk adjustment uses statistical modeling to account for differences in population risk factors before outcomes of care are compared. Risk stratification reports outcomes separately for different groups, unadjusted by a risk model. The National Quality Forum (NQF) recommends using both risk adjustment models and risk stratification when application of the risk model alone would obscure important healthcare disparities.
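The contrast between the two approaches can be sketched in a few lines of Python. The groups, outcomes, and expected probabilities below are invented, and the observed-to-expected (O/E) ratio shown is only one simple form of risk adjustment; real risk models are fitted statistically.

```python
# Minimal sketch contrasting risk stratification with risk adjustment.
# Groups, outcomes, and "expected" probabilities are hypothetical.

def stratified_rates(cases, group_key, outcome_key):
    """Risk stratification: report outcomes separately per group, unadjusted."""
    by_group = {}
    for case in cases:
        by_group.setdefault(case[group_key], []).append(case[outcome_key])
    return {g: sum(v) / len(v) for g, v in by_group.items()}

def risk_adjusted_ratio(cases, outcome_key, expected_key):
    """Simple O/E risk adjustment: observed events / expected events,
    where each 'expected' value comes from a risk model fitted elsewhere."""
    observed = sum(c[outcome_key] for c in cases)
    expected = sum(c[expected_key] for c in cases)
    return observed / expected

cases = [
    {"group": "low_risk",  "complication": 0, "expected": 0.05},
    {"group": "low_risk",  "complication": 0, "expected": 0.05},
    {"group": "high_risk", "complication": 1, "expected": 0.40},
    {"group": "high_risk", "complication": 0, "expected": 0.40},
]

print(stratified_rates(cases, "group", "complication"))
# {'low_risk': 0.0, 'high_risk': 0.5}
print(risk_adjusted_ratio(cases, "complication", "expected"))  # 1 / 0.9, about 1.11
```

Stratification preserves the group-level detail (here, the gap between low- and high-risk patients) that a single adjusted ratio would hide, which is the disparity concern behind the NQF recommendation.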

Measure testing is used to evaluate the validity and reliability of the measure specifications. After a testing work plan is approved, alpha and beta testing are conducted to obtain data for analysis, which can then be used to refine the measure before it is submitted to CMS for approval and implementation.
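One simple reliability check of the kind run during measure testing can be sketched as follows: percent agreement between two independent data abstractors reviewing the same charts. This is a hypothetical illustration; real testing plans typically also use chance-corrected statistics such as Cohen's kappa.

```python
# Hypothetical sketch of a basic reliability check during measure testing:
# percent agreement between two independent abstractors on the same cases.

def percent_agreement(ratings_a, ratings_b):
    """Fraction of cases on which two abstractors record the same value."""
    if len(ratings_a) != len(ratings_b):
        raise ValueError("rating lists must cover the same cases")
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

# Invented abstraction results (1 = numerator criterion met, 0 = not met):
abstractor_1 = [1, 0, 1, 1, 0]
abstractor_2 = [1, 0, 0, 1, 0]
print(percent_agreement(abstractor_1, abstractor_2))  # 0.8
```

Low agreement at this stage signals that a data element is ambiguously specified, which is exactly the kind of finding used to refine the measure before submission to CMS.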

Measure implementation proceeds through the NQF endorsement process with the support of the measure developer's Contracting Officer's Representative (COR). The measure developer refers the completed measure to the COR for approval before submission to the NQF, and the COR completes the submission work for the NQF. The five evaluation criteria discussed under conceptualization are applied. Measures under consideration by CMS must be submitted to the NQF-convened Measure Applications Partnership for review. NQF endorsement is not required, but CMS must publish its rationale for selecting any measure that is not NQF-endorsed.

Implementation after selection involves the following:

  • development and issuance of rules
  • development of coordination and roll-out plans
  • implementation of the roll-out plans
  • implementation of the data management process
  • development of the auditing and validation plans
  • implementation of the educational process
  • conducting a dry run (per the discretion of the COR)
  • reporting requested information to CMS before the measure is rolled out and monitoring begins

Use and continued maintenance
Measure use and continued maintenance involve data collection and the reporting of measure results. The measure is evaluated through environmental and literature scans and through re-evaluation of the measure and its business case. Measure maintenance proceeds through three basic reviews: measure updates, comprehensive re-evaluations, and ad hoc reviews.

David R. Chandler, MD, is a member of the AAOS Performance Measures Committee.

Editor's Note: This is one of an ongoing series of articles prepared by the AAOS Performance Measures Committee. A previous article, "Measuring Up: Defining Performance Measures," appeared in the August issue of AAOS Now. A future article will look inside the AAOS performance measure development process, with a focus on measures developed for the management of hip fractures in the elderly.

Additional Information: