Authors: Sheila R. Barnett, M.D., FASA, et al.
ASA Monitor, 2018, Vol. 82, No. 9, 60-63.
Just over two years ago, U.S. physicians, including physician anesthesiologists, started participating in the Merit-based Incentive Payment System (MIPS) in accordance with the Medicare Access and CHIP Reauthorization Act of 2015 (MACRA). MIPS replaced previous quality programs, consolidating quality and incentive programs under a single umbrella for physicians, otherwise known to Medicare as “eligible clinicians” (ECs). Participating ECs report quality measures under the Quality component, one of four MIPS performance categories that together determine a positive or negative payment adjustment. The underlying premise of the MIPS system assumes that it is possible to: 1) identify measures that truly assess meaningful outcomes or quality; 2) collect data on measures without creating a substantial burden to the EC or the system; and 3) assign an accurate and “fair” payment adjustment.
We are now more than halfway through the second year of the Quality Payment Program (QPP), and many physicians participating in MIPS have probably entertained some serious questions about the program – from the measures available, to whether those measures reflect actual quality of care, to the ability to capture the data at all. More broadly, there have been several journal articles, press releases and blogs questioning the merits and validity of MIPS quality measures. This frustration has not gone unnoticed within the government; earlier this year, the Medicare Payment Advisory Commission (MedPAC) even recommended that Congress pass legislation to end the MIPS program – a recommendation that was met mostly with silence on Capitol Hill. Despite the challenges and frustrations, however, the MIPS program remains intact, with tens of thousands of anesthesiologists having reported data in 2017.
It’s easy to sensationalize the frustrations with the government and MIPS – but calls to end MIPS and press releases questioning the validity of quality measurement can be distracting and are not necessarily constructive in creating meaningful change. One recent example is an article from the Performance Measurement Committee of the American College of Physicians (ACP) published in the New England Journal of Medicine.1 The authors assessed internal medicine measures and found that “Only 37 Percent of MIPS Measures Are Valid”2 according to self-defined criteria. The study, conducted by a 13-member panel of ACP members, rated 86 MIPS measures (out of the 271 MIPS measures available) based on five subjective criteria: importance, appropriateness, clinical evidence, specifications and feasibility or applicability.
While none of the ASA-stewarded MIPS measures were included in the analysis, the ACP committee did assess several pain medicine measures that thousands of anesthesiologists reported in 2017. A review of their assessment of “MIPS 131: Pain Assessment and Follow-up” underscores the limitations of the ACP study. The ACP labeled the measure as “Not Valid” because raters did not score it highly on importance, appropriateness, evidence and feasibility. The reviewers believed that using the measure might “promote overuse of opioid therapy” and “fuel the opioid epidemic” (no evidence cited), that the measure does not address functional status (not the intent of the measure) and that the measure language around “eliminating” pain is unreasonable. Those who have reported this measure may recall that the measure numerator requires documentation of a pain assessment at each visit and documentation of a follow-up plan when pain is identified; “eliminating” pain appears nowhere in the measure specification or the measure rationale. To the naïve reader, this article fuels the negative press about MIPS. But to the anesthesiologists reporting the measure, it may seem perfectly valid and reasonable to report to a registry, track performance and improve patient care.
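To make that numerator logic concrete, the short sketch below shows how a registry might calculate a performance rate for a measure of this type. It is a minimal illustration in Python; the field names and the simplified pass/fail logic are our own assumptions and do not reproduce the official MIPS 131 specification, which includes exclusions and exceptions not modeled here.

# Illustrative only: simplified scoring of a pain assessment/follow-up measure.
# Field names and logic are assumptions, not the official MIPS 131 specification.
from dataclasses import dataclass
from typing import List

@dataclass
class Visit:
    pain_assessed: bool        # pain assessment documented at this visit
    pain_identified: bool      # the assessment found pain to be present
    followup_documented: bool  # follow-up plan documented when pain is present

def meets_numerator(v: Visit) -> bool:
    """A visit counts toward the numerator if pain was assessed and,
    when pain was identified, a follow-up plan was documented."""
    if not v.pain_assessed:
        return False
    return v.followup_documented if v.pain_identified else True

def performance_rate(visits: List[Visit]) -> float:
    """Performance rate = numerator visits / denominator-eligible visits."""
    if not visits:
        return 0.0
    return sum(meets_numerator(v) for v in visits) / len(visits)

# Example: three eligible visits, two meeting the numerator -> 66.7%
visits = [
    Visit(pain_assessed=True, pain_identified=False, followup_documented=False),
    Visit(pain_assessed=True, pain_identified=True, followup_documented=True),
    Visit(pain_assessed=True, pain_identified=True, followup_documented=False),
]
print(f"Performance rate: {performance_rate(visits):.1%}")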
We are not critiquing the ACP for addressing concerns about measure validity, nor any members who have valid concerns about MIPS reporting; rather, we advise caution against casually interpreting the validity of individual measures and passing judgment on all MIPS measures. We agree the system is far from perfect, and there are many decisions made by CMS regarding our own measures that are confusing and frustrating for ASA members and staff. But while our physicians are still bound by the MIPS system, ASA has focused on creating a measure development system that maximizes member participation.
It is not always appreciated that developing a high-quality performance measure is a science requiring expertise, rigor and scientific evidence. The MIPS program requires that measures be tested for reliability, validity and feasibility, that they not duplicate current measures and that they identify opportunities for improvement. For anesthesiologists, issues of attribution (e.g., whether the surgeon, the anesthesiologist or both are responsible for a particular outcome) and issues of “topped out” performance are just two challenges for the specialty. As CMS pushes for more outcome measures instead of process measures, risk adjustment may become increasingly significant, requiring careful and deliberate analysis of a complex issue.
For now, MIPS is here to stay. And within ASA, the Committee on Performance and Outcomes Measurement (CPOM) and the ASA Department of Quality and Regulatory Affairs (QRA) work together to propose and develop measures based on CMS objectives and member needs. Each year, new measure proposals are collected from the membership, subspecialty societies and CMS itself (yes, CMS – it releases an annual measure development plan that identifies its priorities). From these suggestions, potential measure concepts for development are reviewed by CPOM members and Anesthesia Quality Institute (AQI) technical experts in quality reporting, data registry management, coding and methodology. Measures determined to be rigorous and appropriate for use are tested and implemented and may be submitted to CMS for use in programs such as MIPS.
These methods for measure development and testing reflect best practices laid out by CMS, the National Quality Forum (NQF) and other health care stakeholders. During the measure development process, ASA provides opportunities to submit measure concepts, and there is also a public comment period on proposed anesthesia quality measures for the upcoming year. CPOM looks to ASA member feedback as an important part of measure development.
CMS has set a high bar for approving measures, and this can create a disconnect between what feels meaningful on a day-to-day basis to the practicing anesthesiologist and the measure data actually reported. Members who have participated in significant measure development know that it is especially difficult to define anesthesia-specific outcome measures based on high-level evidence. In response, CPOM and ASA staff have recognized that some measures are not “ready” for public programs like MIPS but are still meaningful for members to report, track and compare against other anesthesiologists. ASA and AQI have developed a suite of local improvement measures that will be available for reporting to the AQI National Anesthesia Clinical Outcomes Registry (NACOR) for internal improvement purposes. The measure suite will include measures previously retired by CMS as well as other high-priority process and outcome measures. Internal Improvement Measures will be optional and will not be shared with CMS. ASA hopes this opportunity will add to the growing NACOR database and allow practices to track outcomes over time, unhindered by CMS’s annual approval process. Reporting such measures is intended to enhance research and practice; it can aid in benchmarking, contract negotiations and practice improvement initiatives and contribute to the foundation of future quality programs. AQI’s NACOR will release more information on the options available for reporting and benchmarking quality measures in 2018 and 2019.
Predicting the future of federal payment programs and other value-based initiatives is fraught with risk, but it seems likely that quality measure development will remain a fixture in health care for the near future. In its article, the ACP was critical of the MIPS program and the performance measures available, and it suggested a “timeout” to pause and reassess physician performance measurement. In reality, a “timeout” is a luxury we are unlikely to be able to afford in our current volatile health care and regulatory environment. CPOM remains committed to developing measures that are well-supported, rigorously developed and improve the quality of anesthesia care, both for reporting programs such as MIPS and for internal improvement purposes within NACOR.