INFLUENTIAL PUBLICATIONS

Quality Improvement in Psychiatry: Why Measures Matter

Published Online: https://doi.org/10.1176/foc.9.2.foc232

Abstract

Increasing attention has been directed in healthcare today to the importance of performance measurement (i.e., the implementation of measurable methods to demonstrate that practitioners are engaged in high-quality, evidence-based medicine). Many medical specialties, as well as many state medical licensing boards, now require that candidates submit performance measurement data to be eligible for maintenance of board certification or medical licensure. National organizations such as the National Quality Forum and the American Medical Association's Physician Consortium for Performance Improvement are active collaborators with federal, state, and medical specialty initiatives to improve healthcare. These developing efforts are summarized here, with a specific focus on their status in the field of psychiatry.

(Reprinted with permission from Journal of Psychiatric Practice 2007; 14(suppl 2):8–17)

Consumers of health care expect clinicians to provide high-quality care as a matter of routine. The Institute of Medicine (IOM) defines high-quality healthcare in terms of the “degree to which health care services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge” (1). Although healthcare professionals obviously want to deliver quality care, national studies have shown that often this does not occur (2). Evidence-based clinical practice guidelines summarize current professional knowledge and identify best care practices for optimizing patient management. As defined by the Department of Veterans Affairs (VA), such guidelines provide “recommendations for the performance or exclusion of specific procedures or services derived through a rigorous methodological approach that includes the following:

• Determination of appropriate criteria, such as effectiveness, efficacy, population benefit, or patient satisfaction; and

• Literature review to determine the strength of the evidence (in part based on study design) in relation to these criteria” (3).

Evidence-based clinical practice guidelines are now available for many major mental health and substance use disorders. The American Psychiatric Association (APA) and the VA have both invested in the development, maintenance, and dissemination of information on the clinical efficacy of psychiatric treatments based on findings in well conducted clinical trials. The APA publishes each guideline and guideline revision as a supplement to the American Journal of Psychiatry. The guidelines are also available online along with a continuing medical education (CME) course and quick reference summaries (4).

Unfortunately, gaps between the clinical care recommended in evidence-based practice guidelines and the actual care that is delivered have been found in all clinical specialties, including psychiatry. In 2002, the President's New Freedom Commission on Mental Health noted the underuse of evidence-based practices in the treatment of severe mental illness (5). In 2002, Bauer et al. reviewed 41 studies involving adherence to specific mental health guidelines (26 cross-sectional investigations done after guideline release, 6 studies conducted before and after release of guidelines, and 9 controlled trials of specific interventions) and found adequate adherence in only 27% of the cross-sectional and pre/post studies (6). In 2003, McGlynn et al. reported that only 54.9% of a sample of randomly selected individuals from 12 metropolitan areas in the United States received recommended care based on 439 indicators of quality of care for 30 acute and chronic conditions as well as preventive care (2). For example, they found that only 10.5% of patients with alcohol dependence received recommended care involving 5 indicators, while 57.7% of patients with depression received recommended treatment involving 14 indicators. In 2005, the IOM released a follow-up to its 2001 groundbreaking document “Crossing the Quality Chasm” (1) concerning quality of care for mental and substance use conditions, in which it reported that only 24% of 21 studies had documented adequate adherence to specific recommendations in clinical practice guidelines for the treatment of various mental and substance use disorders (7). Other studies have also reported significant variation from evidence-based practice guidelines in the treatment of adult depression, bipolar disorder, panic disorder, schizophrenia, anxiety disorder, and pediatric depression (8–13).

CAUSES OF CLINICAL VARIATION

A variety of factors—many of which are legitimate parts of good clinical practice—can lead to variations in care. However, variation that reflects the over-, under-, or misuse of clinical interventions is undesirable and can affect outcomes and waste scarce clinical resources. Overuse of clinical resources includes use of inappropriate treatments for which evidence of benefit is lacking; underuse includes failure to deliver, or a significant delay in delivering, care that would be beneficial; misuse includes use of clinical interventions that may pose significant safety risks. Studies have shown that, simply because information is available and disseminated in the clinical literature or in guidelines, clinical practice does not routinely reflect the adoption and use of even those practices that are well grounded in research studies and on which there is strong clinical consensus (14).

Clinician-related factors

A number of clinician-related factors contribute to the gap between evidence and practice:

• The time-consuming process involved in identifying, reading, analyzing, and applying research evidence applicable to a specific patient scenario (15).

• The emphasis traditionally placed on clinical experience and intuition, often referred to as the “art” of medicine. The validity of this approach is questionable due to the small Ns of previous experience and the unreliability of recollection (15).

• Lack of awareness of the availability or content of evidence-based practice guidelines (16).

• Delay in implementation: The dissemination of clinical practice guidelines does not, by itself, change practice (17). With the exception of some new technologies and pharmaceuticals, the timeline for incorporation of new information into daily practice can exceed a decade (18).

Factors related to the healthcare system

A number of factors associated with the healthcare system itself can also contribute to the gap between evidence and practice:

• Lack of widespread use of practice guidelines in graduate training: In 2002, Hoge et al. reported that practice guidelines, even those developed by appropriate organizations using rigorous methodology, were not being widely used in the classroom or in supervised clinical experiences in graduate psychiatry training programs (19).

• Poorly organized healthcare delivery systems: Numerous reports, including the 2005 IOM report mentioned above (7), have emphasized the negative effects of the decentralized and fragmented system that supports behavioral health care in the United States. The current organizational system requires multiple steps and clinical handoffs and has resulted in care that is less than desirable in terms of timeliness, effectiveness, and appropriate use of resources.

• Uneven availability of resources: Discrepancies in the availability of practitioners in urban versus rural areas affect the ability to access and deliver appropriate care. In addition, variability in financial resources, including lack of parity in mental health coverage, influences the degree to which recommended care is adopted.

Need to individualize treatment

When discussing implementation of evidence-based guidelines, the issue of how to tailor population-based practice recommendations to achieve individualized, patient-centered care invariably arises. Professional judgment is an important and accepted component in addressing the complex scenarios that occur in the treatment of chronic illnesses. For example, patients with chronic illnesses frequently have multiple comorbid conditions and may be taking multiple medications, increasing the complexity of treatment decisions. Individualizing treatment therefore involves making exceptions in order to tailor therapeutic regimens to patients' unique circumstances and to be responsive to their preferences and values. Such individualization can make it more difficult to implement guidelines and use performance measures. The use of comprehensive and accessible medical records (e.g., electronic health records) could assist in making such treatment decisions. Nevertheless, it is impossible to estimate how often exceptions to evidence-based recommendations occur without some type of tracking process.

REDUCING UNDESIRABLE CLINICAL VARIATION

A number of strategies have been proposed for improving the delivery of mental health care. These include policy changes, reducing system fragmentation, increasing clinician competencies, empowering consumers, implementing chronic disease case management and collaborative care models, and implementing clinical quality improvement and performance measurement.

In its 2005 report, the IOM suggested several critical pathways for improving the quality of mental health care, so that it is safe, effective, patient-centered, timely, efficient, and equitable (7). Among these critical pathways, the IOM stressed the need for improvement in how quality of care is measured, noting that the quality measurement and improvement infrastructure in the behavioral health field is weaker than that in place for the general healthcare system. While the IOM report focused on the role of national organizations in addressing this weakness in infrastructure, it also identified two ways in which individual clinicians and provider organizations should become involved:

1. Increasing use of valid and reliable questionnaires or other patient-assessment instruments to assess outcomes of treatment (patient-centered measurement); a brief sketch of such outcome tracking follows this list.

2. Using measures of processes and, when available, outcomes of care to continuously improve the quality of care delivered, utilizing techniques such as data feedback and process redesign (20).
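To make the first recommendation concrete, the sketch below (in Python) scores a symptom questionnaire at successive visits and flags when a treatment response has been reached. The instrument, the scores, and the 50% response convention are illustrative assumptions, not details from the article; a real program would use a validated instrument and locally agreed thresholds.

```python
# A minimal sketch (hypothetical data and thresholds): tracking a patient's
# questionnaire scores across visits to assess treatment outcome.
from dataclasses import dataclass

@dataclass
class QuestionnaireResult:
    visit_date: str
    total_score: int  # summed item scores; lower means fewer symptoms

def response_achieved(baseline: int, followup: int, fraction: float = 0.5) -> bool:
    """Treat a drop of at least `fraction` from baseline as a treatment response."""
    return followup <= baseline * (1 - fraction)

visits = [
    QuestionnaireResult("2008-01-10", 18),  # baseline assessment
    QuestionnaireResult("2008-02-14", 12),
    QuestionnaireResult("2008-03-20", 8),
]

baseline = visits[0].total_score
for v in visits[1:]:
    status = "response" if response_achieved(baseline, v.total_score) else "not yet at response"
    print(f"{v.visit_date}: score {v.total_score} ({status})")
```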

Quality improvement and performance measurement

Quality improvement (QI) initiatives, building on the definition of quality provided by the IOM, support clinicians in providing care that is “consistent with current professional knowledge.” In 2007, the American College of Physicians published an article in the Annals of Internal Medicine, based on a report from The Hastings Center, an independent nonprofit bioethics research institute, in which clinical QI was defined as “systematic, data-guided activities designed to bring about immediate improvements in health care delivery in particular settings” (21). The report described QI as a form of experiential learning that involves deliberate actions that are expected to improve care and are guided by data reflecting the impact of those actions. The report also indicated that QI activities are an appropriate approach to professional oversight of clinical practice.

The improvement curve

QI is the third step in a three-phase process that has been termed the “Improvement Curve” (Figure 1) (22).

Figure 1. The Improvement Curve

Phase 1: The clinical uncertainty phase.

Initially, little consensus or evidence exists regarding the most effective methods of caring for patients with a specific condition. As time passes, well conducted clinical trials and expert consensus begin to identify recommended approaches to care. However, at this point, overall adherence to these approaches is usually low because of a lack of widespread familiarity with this information.

Phase 2: The clinician education phase.

During the second phase of the Improvement Curve, focused educational efforts facilitate dissemination of evidence-based clinical guidelines for care. As a result of these educational efforts, adherence to practices consistent with current professional knowledge improves. However, studies have shown that improvement rates associated with educational efforts alone tend to level off as adherence reaches 60%–75% (22).

Phase 3: The support systems phase.

During the third phase of the Improvement Curve, QI methods are introduced to promote quality care and to support improvement. At this phase, it is assumed that failure to provide care that is consistent with current professional knowledge is no longer due to lack of familiarity with the evidence, but rather to a breakdown in the processes of care that are integral in supporting providers' daily practice. Two QI methods can be used to address these problems.

1. QI performance measures

• Assess quality of care with accurate data, comparing what is done or achieved with what is desired according to current clinical evidence.

• Provide objective feedback to show the degree to which clinical practice mirrors or varies from evidence-based recommendations.

2. QI resource tools

• Support the ongoing daily practice-based processes necessary to provide evidence-based care.

• Provide tools (e.g., alerts, flow charts, checklists, algorithms, and encounter forms) formatted to act as reminders that prompt the desired process of care; a brief sketch of such a reminder follows this list.

• Encourage use of electronic health records to facilitate coordination of care, accuracy of records, and collection of data for QI.
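As one illustration of the resource tools named above, the following sketch implements a simple point-of-care reminder rule. The field names and rules are hypothetical; the two care processes checked (a documented suicide risk assessment and a 12-week acute-phase antidepressant trial) echo measures discussed later in this article, but a real system would derive its rules from the electronic health record and locally adopted guidelines.

```python
# A hypothetical point-of-care reminder: given a simplified patient record,
# return alerts for recommended care processes that appear to be missing.
from datetime import date

def depression_reminders(patient: dict, today: date) -> list:
    """Return reminder messages for a patient with major depressive disorder."""
    alerts = []
    if patient.get("diagnosis") != "major depressive disorder":
        return alerts  # the rules below apply only to this diagnosis
    if not patient.get("suicide_risk_assessed"):
        alerts.append("No suicide risk assessment documented.")
    start = patient.get("antidepressant_start")
    if start and not patient.get("on_antidepressant") and (today - start).days < 84:
        alerts.append("Antidepressant discontinued before a 12-week acute-phase trial.")
    return alerts

print(depression_reminders(
    {"diagnosis": "major depressive disorder",
     "suicide_risk_assessed": False,
     "antidepressant_start": date(2008, 1, 2),
     "on_antidepressant": False},
    today=date(2008, 2, 1),
))
```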

Figure 2 presents a model for continuous QI developed by Deming and Juran that links performance measurement and information feedback to changes that lead to improvement (23).

Figure 2. The Physician Continuous Quality Improvement Cycle

The performance measurement process

Performance measurement involves the collection, analysis, and reporting of data that guide QI activities. The “performance” of healthcare providers includes 1) the systems and processes in place to provide health care, 2) intermediate results, and 3) long-term outcomes. Of these three components, long-term outcomes, while the most meaningful, are also the most difficult to measure. Because long-term outcomes are so difficult to quantify, performance measurement usually depends on measuring processes or intermediate results. The process of care reflects interactions between clinician and patient, including which interactions should take place and when they should occur.

Clinical performance measures assess the difference between recommended clinical processes and actual practice patterns. An individual evidence-based clinical performance measure produces a quantitative assessment of the quality of care as currently defined by evidence-based guidelines. Monitoring that uses process-related performance measures determines the extent to which a particular evidence-based practice is conducted. The numerical calculation is the rate at which an appropriate activity is performed (defined by the numerator) in a defined population (defined by the denominator). Such monitoring provides feedback to the clinician on his or her aggregate patterns of care and how that care corresponds to current clinical recommendations. Measuring performance and providing aggregated feedback to physicians have been shown to have a positive impact on the care provided, especially when baseline adherence to recommended practice is relatively low (24).
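The numerator/denominator arithmetic just described can be stated in a few lines of code. Below is a minimal sketch with hypothetical chart-review records; the eligibility and process criteria shown are placeholders, not a published measure specification.

```python
# Performance rate = patients receiving the recommended process (numerator)
# divided by patients eligible for it (denominator). Data are hypothetical.
def performance_rate(records, eligible, performed):
    denominator = [r for r in records if eligible(r)]
    numerator = [r for r in denominator if performed(r)]
    return len(numerator) / len(denominator) if denominator else float("nan")

charts = [
    {"dx": "MDD", "risk_assessed": True},
    {"dx": "MDD", "risk_assessed": False},
    {"dx": "panic disorder", "risk_assessed": False},  # not in the denominator
    {"dx": "MDD", "risk_assessed": True},
]

rate = performance_rate(
    charts,
    eligible=lambda r: r["dx"] == "MDD",     # denominator criterion
    performed=lambda r: r["risk_assessed"],  # numerator criterion
)
print(f"Adherence rate: {rate:.0%}")  # 67% of eligible patients
```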

Measurement is, of course, not an end in itself; rather it is the first step in a cycle in which information is used to analyze and improve care (Figure 2). Performance measures can be used in multiple ways in this process:

• To provide a baseline understanding of practice patterns at a point in time or over time

• To identify areas on which to focus attention and implement improvement interventions

• To assess the impact of improvement interventions

• To monitor the process of providing care over time to demonstrate quality.

Just measuring and providing data to healthcare professionals will not, in isolation, bring about improved adherence to the processes of care outlined in evidence-based clinical practice guidelines. To achieve desired improvement, the required changes in practice patterns must be facilitated and supported by activities at the point where care is provided. Such quality-improvement activities include point-of-care flags or reminders as well as documentation systems that provide aggregated information to facilitate review of response to therapy over time. When an activity-based change is determined, through re-measurement, to have supported the desired improvement, then that activity should be adopted into the ongoing daily practice of an individual provider or the organization/site of care that supports that provider, so that the targeted practice does not return to pre-intervention levels. This final step completes the cycle of measurement-based quality improvement.
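Read as an algorithm, the cycle just described is: measure a baseline rate, apply a supporting change at the point of care, re-measure, and adopt the change once the target is sustained. The loop below is a schematic sketch of that logic with simulated measurements; the goal, effect size, and round limit are arbitrary assumptions.

```python
# A schematic measurement-based QI loop: measure, intervene, re-measure,
# and stop (adopt the change) once the target adherence rate is reached.
def qi_cycle(measure, apply_intervention, goal, max_rounds=4):
    baseline = measure()
    rate = baseline
    rounds = 0
    while rate < goal and rounds < max_rounds:
        apply_intervention()  # e.g., add a reminder or an encounter form
        rate = measure()      # re-measure to confirm the change helped
        rounds += 1
    return baseline, rate, rounds

# Toy stand-ins for chart review and a point-of-care change.
adherence = [0.55]  # simulated adherence rate over time

def measure():
    return adherence[-1]

def improve():
    adherence.append(min(1.0, adherence[-1] + 0.10))  # assumed effect per round

print(qi_cycle(measure, improve, goal=0.85))  # roughly (0.55, 0.85, 3)
```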

THE IMPACT OF PERFORMANCE MEASUREMENT

Clinicians need to be informed about public quality improvement and performance measurement initiatives and to play a role as early collaborators in shaping policy and application in these areas. The IOM report suggested that a performance measurement system should provide information for multiple uses, including provider-led improvement efforts, public reporting, payment and benefit design, and population health initiatives. Currently, efforts are underway to implement public reporting, performance incentives, and professional education and engagement.

Internal versus external performance measurement

The quality improvement cycle described above (Figure 2) involves internal use of performance measurement within the healthcare community to study and improve quality of care. Professional commitment is a significant motivation for physicians to participate in performance measurement initiatives designed to provide them with feedback that compares their performance with national benchmarks and identifies areas for improvement.

However, an important and growing external incentive for performance measurement exists in the public healthcare arena, where the stated goal is to provide objective measures of the competence of healthcare providers, persons, or organizations and their ability to deliver healthcare value, translated as high-quality care and cost-effective resource utilization. Physicians are increasingly being held accountable for the quality of care they provide, with every proposal for healthcare reform now including requirements for measuring performance. While these initiatives were initially focused on general medical-surgical specialties, this movement has rapidly spread into the field of psychiatry. Nevertheless, as reported by the IOM in 2005 (7), the infrastructure needed to measure, analyze, and publicly report data on mental health and substance abuse care remains less well developed than that for general health care (25).

The purposes for which external performance measurement is used depend to some extent on the stakeholder involved, although the goal is always related to accountability for quality of patient care and is frequently also relevant to purchasing decisions (Table 1). Payers for healthcare services are increasingly demanding information on the quality of the health care they are purchasing. Current initiatives are likely to lead to future requirements that physicians participate in national programs of performance measurement as a prerequisite to receiving payment or being included in approved panels. Public pressure for competency assessment has also led to professionally directed initiatives for performance measurement and QI to assure high quality care. Maintenance of certification (MOC) in many specialties requires the measurement of performance in selected clinical domains (see discussion of Board Recertification below).

Table 1. Use of Performance Measurement by Different Stakeholders

Public Initiatives

The U.S. Department of Health and Human Services (HHS), as the cabinet-level federal agency with responsibility for the Centers for Medicare and Medicaid Services (CMS), the Agency for Healthcare Research and Quality (AHRQ), the Substance Abuse and Mental Health Services Administration (SAMHSA), the Food and Drug Administration (FDA), and the National Institutes of Health (NIH), has launched a significant public-private collaborative initiative to improve the quality and value of the health care delivered in this country. Based on the hypothesis that public reporting is the surest way to achieve better health care at lower cost, the Value-Driven Health Care (VDHC) Initiative aims to provide the public with information about the quality and cost of services delivered by healthcare providers (26). This ambitious multi-year initiative will require the many stakeholders in the national healthcare system, including providers, consumers, employers, health plans, unions, government entities, and others, to work together in collaborative voluntary efforts. Participants in the VDHC Initiative commit to the following four objectives or “cornerstones” of what has also been termed “The Transparency Initiative”:

1. Health information technology: use of health information technology standards to connect the components of the healthcare system and permit ease of communication and data exchange.

2. Reporting on quality: measuring and reporting the performance of hospitals, physicians, and other providers; providing public benchmarks and comparative information.

3. Reporting on price: measuring and publishing price information tied to quality data in order to give the public information on the value of healthcare services.

4. Incentives for quality and value: providing incentives for quality and value to support both those who provide and those who purchase high-quality, cost-effective care.

In 2006, President Bush signed an executive order that requires all agencies that administer federal health insurance benefits, including Medicare, the Federal Employees Health Benefits Plan, TRICARE for uniformed personnel (Department of Defense), the Indian Health Service (Health and Human Services), and any program administered under the VA, to share price and provider quality data with beneficiaries. The order also calls for similar reporting in non-federal healthcare networks in the future. This initiative, which involves a public/private partnership with agencies that are implementing quality measurement systems, is more than conceptual: 50 of the top 200 employers in the United States and at least one major labor union have signed statements of support for the initiative and the four cornerstones. Many state governments have also signed statements of support or are developing their own executive orders to implement the initiative.

The Transparency Initiative is being implemented in pilot projects in which designated contractors are aggregating and analyzing data on clinical services provided to patients with private insurance or Medicaid and Medicare coverage. Consumers, private insurers, employers, and state governments are working together using clinical performance measures to produce information that will allow beneficiaries to make more informed coverage choices. The first reports will provide expanded information for Medicare beneficiaries on the quality of service provided by Medicare providers.

CMS has also initiated a Quality Improvement Roadmap Initiative involving hospitals, long-term care facilities, home health agencies, and physicians' offices. As part of this initiative, CMS is working with federal and state entities, accreditation bodies, insurers, and professional societies to achieve consensus on evidence-based measures that have wide acceptability in the healthcare industry. To this end, the VA, the Joint Commission (JC), the National Committee for Quality Assurance (NCQA), the Hospital Quality Alliance (HQA), the Ambulatory Care Quality Alliance (AQA), the National Quality Forum (NQF), and medical specialty societies are all actively involved in developing, endorsing, and disseminating reliable and valid performance measures based on sound clinical evidence. The American Medical Association's Physician Consortium for Performance Improvement (PCPI) is also a major developer of performance measures; with representation from the various specialty professional organizations, its measures are developed by physicians, for physicians. The NQF reviews and endorses measures after a thorough review of the methodology used in the measure development process. The AQA is the implementing arm, reviewing and selecting measures for inclusion in data collection, data aggregation, and data reporting efforts.

In April 2007, CMS issued a letter to State Medicaid Directors announcing a national Value-Driven Health Care (VDHC) initiative that incorporates the four “cornerstones” of the Health Care Transparency Initiative, as well as a new national Medicaid Quality Improvement Program (MQIP).

Although controversy still exists concerning the public reporting of clinical quality measures, there has been a steady increase in such reporting over the last few years, primarily at the hospital level (e.g., state initiatives, the Hospital Quality Alliance [HQA], the JC, and the NCQA's Healthcare Effectiveness Data and Information Set [HEDIS]). In addition to providing information to the public, these programs incorporate organizational performance measures related to maintenance of accreditation status. It is hoped that such transparency will incentivize organizations and, in the future, individuals to assess their adherence to evidence-based clinical practices and implement quality improvement activities if needed.

Payment reform and pay for performance

Reform in clinical payments is a response to the increasing use of valuable, finite clinical resources without evidence that more interventions and more spending will routinely result in better care. While many studies have reported increased medical spending that did not produce better results (27), there is evidence that rising spending for mental health has resulted in improved access to care and overall good value as a result of expenditures for evidence-based care for depression, bipolar disorder, and schizophrenia (28). However, while mental health spending appears, on average, to be purchasing good value, there is still evidence of specific quality deficits, as shown by relatively low scores for depression care in HEDIS, by the RAND Community Quality Index Study, which found that people with depression receive only 57.7% of recommended services, and by a 2006 study indicating that one fifth to one fourth of spending on depression has little prospect of helping the patient (28). Thus private payers, employers, the government, and taxpayers are all seeking greater accountability for the money being spent.

Pay for performance (P4P) involves linking physician reimbursement to the provision of quality care. The goal is to provide incentives for clinicians to adhere to evidence-based standards of care and to reward clinicians who avoid overuse, underuse, and misuse of critical clinical interventions. The expected outcome of P4P initiatives is a reduction in undesirable variation as a result of providing evidence-based care, along with a reduction in total spending as a result of better outcomes (e.g., fewer delays in diagnosis, fewer outpatient visits, reduced hospitalizations, fewer complications). The key to using this strategy, which remains controversial although it is increasingly widely accepted, is the use of valid and reliable clinical measures, feasible data collection methods, and consistent and transparent analysis and reporting mechanisms. In P4P efforts, the performance measurement system and mechanisms come from a predetermined external source (e.g., the Quality Improvement Roadmap Initiative from CMS). However, if quality improvement interventions are found to be necessary, it is up to the individual practitioner to identify and implement such activities. The key to clinician success in a P4P system is adoption of quality improvement strategies that result in increased use of key evidence-based clinical interventions (29).

In 2006, Congress authorized CMS to develop a pay-for-performance initiative by 2009. In response, as part of its overall quality improvement efforts, CMS launched the Physician Voluntary Reporting Program (PVRP); the program was renamed the Physician Quality Reporting Initiative (PQRI) in 2007 (30). The PQRI is a first step toward linking Medicare payments to health professionals with quality of care. The goal is to prevent chronic disease complications, avoid preventable hospitalizations, and generally improve the quality of care. When the program was launched in 2006, it incorporated a core set of 16 national consensus performance measures, selected from an overall set of 36 evidence-based, clinically valid measures based on practice guidelines endorsed by physicians and medical specialty societies. During an initial period, physicians could participate in voluntary reporting and receive confidential feedback from information captured through the administrative claims system, augmented with a set of special codes that provided the numerator data required for particular performance measures. Starting July 1, 2007, eligible professionals who elected to participate in the still voluntary reporting program by providing data on a designated set of quality measures on claims for dates of service from July 1 to December 31, 2007 could earn a lump-sum bonus payment, subject to a cap of 1.5% of total allowed charges for covered Medicare physician fee schedule services. In November 2007, Congress indicated that this process would continue in 2008.
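As a worked example of the bonus arithmetic: because the 2007 payment was capped at 1.5% of total allowed charges for covered fee-schedule services in the reporting period, the maximum bonus is a simple product. The dollar figure below is hypothetical.

```python
# Maximum 2007 PQRI bonus under the 1.5% cap; the charge figure is invented.
def max_pqri_bonus(total_allowed_charges, cap_rate=0.015):
    return total_allowed_charges * cap_rate

print(f"${max_pqri_bonus(120_000):,.2f}")  # $1,800.00 on $120,000 in allowed charges
```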

In coordination with the American Medical Association's PCPI, CMS is developing measures that apply to all priority clinical conditions and specialty practices. Currently, healthcare professionals are only required to report measures applicable to services they provide to Medicare beneficiaries. In 2007, only one measure related to psychiatric care was included in the introductory set of measures: whether a patient with acute depression receives a full 12-week trial of antidepressant medication. However, an expanded set of 119 measures for 2008 was posted on November 15, 2007 in the Federal Register and on the U.S. HHS CMS website (www.cms.hhs.gov/PQRI/15_MeasuresCodes.asp#TopOfPage). Thus, in 2008, three additional measures related to depression are included: whether screening for depression occurs; whether patients meet DSM-IV criteria for major depressive disorder; and whether patients with major depressive disorder are assessed for suicide risk.

To support this public-private quality reporting initiative, a multi-faceted system is being developed that involves 1) organizations that develop performance measures; 2) an umbrella organization that reviews measures against development criteria and endorses those measures that are deemed to meet these criteria and address declared national clinical priorities; 3) organizations that develop, test, and implement models for data collection, aggregation, and reporting of selected endorsed measures; and 4) organizations that will apply these measures and support systems (see Figure 3). The implementation of this system heralds a new era in how health care will operate in the United States.

Figure 3. Overview of Performance Measurement Infrastructure

Board recertification

In 2001, the American Board of Medical Specialties (ABMS), of which the American Board of Psychiatry and Neurology (ABPN) is a member, approved the transition to a continuous professional development program called Maintenance of Certification (MOC) by the end of 2005. In response to public concern that it was inadequate to rely on a cognitive examination alone to ensure ongoing competence of physicians, the MOC program focuses on six competencies (patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice), which are incorporated into four component categories used to recertify specialists:

• Part I. Professional Standing (a valid, unrestricted license)

• Part II. Lifelong Learning and Self-Assessment (participation in educational and self-assessment programs)

• Part III. Cognitive Expertise (formal examination)

• Part IV. Practice Performance Assessment.

The Practice Performance Assessment asks specialists to demonstrate that they can assess the quality of care they provide compared with their peers and national benchmarks and that they can, as needed, apply best evidence or consensus recommendations to improve care using follow-up assessment. Practice Performance Assessment is fundamentally a QI and performance measurement approach to ensuring high quality patient care.

The ABPN's MOC Program contains the Part IV component, which is titled Performance in Practice. Beginning in 2012, a phased-in approach will require diplomates of the ABPN to participate in a quality improvement program that includes two modules: 1) a clinical module, involving chart review, and 2) a feedback module, involving patient or peer review. The clinical module will require the diplomate to use data obtained from or pertinent to his or her personal clinical practice and to evaluate cases in a specific diagnostic category with reference to best practice and/or practice guidelines published in the literature. The diplomate must develop an intervention plan for improving his or her performance, as necessary, and then re-assess data from another sample of cases in the same diagnostic category within 24 months.

MASTERS OF OUR OWN PROFESSION

Clinicians need to take the lead in QI and performance measurement. Otherwise they are vulnerable to challenge from consumers, payers, and political stakeholders and risk losing leadership in their fields. In 1998, the President's Advisory Commission on Consumer Protection and Quality in the Healthcare Industry (www.hcqualitycommission.gov) stated that performance measurement should be a key component in nationwide quality improvement initiatives. Also in 1998, a work group sponsored by the National Institute of Mental Health called for “constructing monitoring tools and systems to assess adherence to guidelines [which are] important for developing the capacity to monitor the quality of routine care.” The work group noted that using performance measures to monitor care can identify gaps between evidence-based practices and care that is actually being delivered and can highlight areas where practice needs to be improved (31).

Unfortunately, efforts to implement performance measures in mental health care have lagged behind. In 2003, the first National Healthcare Quality Reports published by HHS stated that mental illness is a clinical area without “broadly accepted” and “widely used” measures of quality. A follow-up report in 2005 showed no substantial progress. In its 2005 report on mental and substance use disorders, the IOM continued to recommend that clinicians and organizations “use measures of the processes and outcomes of care to continuously improve the quality of care they provide.” Yet a 2007 exploratory study, which investigated the state of quality measurement in mental health at seven academic health centers, found that using measurement to assess organizational or practitioners' adherence to evidence-based clinical processes of care was still “not routine” (32).

Although performance measurement in medical care is still in its very early stages and is complicated by a number of methodological challenges, it is in the best interest of physicians from all clinical specialties to become involved in the early stages of this initiative. Clinicians in our field have the opportunity to serve as “Innovators” and “Early Adopters,” as described in the current national bestseller The Tipping Point (33)—visionaries who set themselves apart by their willingness to participate in a movement before it is perfected and who have a tolerance for ambiguity that allows them to innovate. Such individuals have the opportunity to learn about changes before they become part of the culture or, even more important, to have a share in shaping the changes. In contrast, individuals in the categories of “Early and Late Majorities” miss the opportunity to be “in on the ground floor” and too often have to play catch-up.

While performance measurement is a reality of twenty-first century medical care, the field of psychiatry is still open to “Innovators” and “Early Adopters” in this area. It is to the advantage of mental healthcare professionals to obtain a working knowledge of QI and performance measurement and to participate in activities that allow them to assess the care they provide and implement changes in the process of care when unexplained variation is identified. Clinicians who participate in national demonstration projects will have the opportunity not only to learn more about these initiatives early in their implementation but also to provide input into the shaping of these efforts to improve the quality of behavioral health care. As emphasized in this article, it is important that those involved in this endeavor a) focus measures on what is important, not necessarily what is easily measured; b) ensure that there is a strong evidence base for the measures being developed in order to maximize provider buy-in; and c) use evidence-based implementation strategies to improve performance.

REFERENCES

1 Institute of Medicine. Crossing the quality chasm: A new health system for the 21st century. Washington, DC: Institute of Medicine; March 1, 2001 (www.iom.edu/?id=34257; can be accessed online free of charge at www.nap.edu/catalog.php?record_id=10027).

2 McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003; 348:2635–45.

3 VHA Directive 96-052, August 29, 1996:1–2.

4 American Psychiatric Association. APA practice guidelines for the treatment of psychiatric disorders (available at www.psychiatryonline.com/resourceTOC.aspx?resourceID=4).

5 The President's New Freedom Commission on Mental Health. Achieving the promise: Transforming mental health care in America. Final report. Department of Health and Human Services Publication No. SMA-03-3832. Rockville, MD, July 2003 (available at www.mentalhealthcommission.gov/reports/finalreport/fullreport-02.htm).

6 Bauer MS. A review of quantitative studies of adherence to mental health clinical practice guidelines. Harv Rev Psychiatry 2002; 10:138–53.

7 Institute of Medicine. Improving the quality of health care for mental and substance-use conditions: Quality chasm series. Washington, DC: Institute of Medicine; November 1, 2005 (www.iom.edu/?id=30858; can be accessed online free of charge at www.nap.edu/catalog.php?record_id=11470#toc).

8 Charbonneau A, Rosen AK, Ash AS, et al. Measuring the quality of depression care in a large integrated health system. Med Care 2003; 41:669–80.

9 Unützer J, Simon G, Pabiniak C, et al. The use of administrative data to assess quality of care for bipolar disorder in a large staff model HMO. Gen Hosp Psychiatry 2000; 22:1–10.

10 Roy-Byrne PP, Katon W, Cowley DS, et al. A randomized effectiveness trial of collaborative care for patients with panic disorder in primary care. Arch Gen Psychiatry 2001; 58:869–76.

11 Lehman A. Quality of care in mental health: The case of schizophrenia. Health Aff 1999; 18:52–65.

12 Wang P, Demler O, Kessler R. Adequacy of treatment for serious mental illness in the United States. Am J Publ Health 2002; 92:92–8.

13 Zima BT. Quality of publicly-funded outpatient specialty mental health care for common childhood psychiatric disorders in California. J Am Acad Child Adolesc Psychiatry 2005; 44:130–44.

14 Solberg LI, Mosser G, McDonald S. The three faces of performance measurement: Improvement, accountability, and research. Jt Comm J Qual Improv 1997; 23:135–47.

15 Osser DN, Patterson RD, Levitt JJ. Guidelines, algorithms, and evidence-based psychopharmacology training for psychiatric residents. Acad Psychiatry 2005; 29:180–6.

16 Azocar F, Cuffel BD, Goldman W, et al. Dissemination of guidelines for the treatment of major depression in a managed behavioral health care network. Psychiatr Serv 2001; 62:6–7.

17 Bero LA, Grilli R, Grimshaw JM, et al. Closing the gap between research and practice: An overview of systematic reviews of interventions to promote the implementation of research findings. BMJ 1998; 317:465–8.

18 Kilbourne AM, Valenstein M, Bauer MS. The research-to-practice gap in mood disorders: A role for the U.S. Department of Veterans Affairs. J Clin Psychiatry 2007; 68:502–4.

19 Hoge MA, Jacobs S, Beliltsky R, et al. Graduate education and training for contemporary behavioral health practice. Adm Policy Ment Health 2002; 29:335–57.

20 Pincus HA, Page AEK, Bruss B, et al. Can psychiatry cross the quality chasm? Improving the quality of health care for mental and substance use conditions. Am J Psychiatry 2007; 164:712–9.

21 Lynn J, Baily MA, Bottrell M, et al. The ethics of using quality improvement methods in health care. Ann Intern Med 2007; 146:666–73.

22 Golden WE. Reducing failure rates: System changes lead to better care. J Ark Med Soc 2003; 99:232–3.

23 Introduction to physician performance measurement sets: Tools developed by physicians for physicians. American Medical Association, October 2001.

24 Jamtvedt G, Young JM, Kristoffersen DT, et al. Audit and feedback: Effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2006 Apr 19;(2):CD000259 (www.cochrane.org/reviews).

25 Persell SD, Baker DW, Weiss KB. ACP medicine CE: XIII performance measurement in clinical practice. WebMD; September 2005 update (www.acpmedicine.com/abstracts/sam/med0013.htm).

26 U.S. Department of Health and Human Services. Value driven health care. November 2, 2007 (www.hhs.gov/valuedriven).

27 Straube BM. Variations in cost and quality: What is to be done? AHR, NIHCM, RWJF Congressional Briefing, CMS, September 8, 2006 (http://allhealth.org/BriefingMaterials/Straube9–08-2006-395.pdf).

28 Druss BG. Rising mental health costs: What are we getting for our money? Health Aff (Millwood) 2006; 25:614–22.

29 American College of Physicians. Linking physician payments to quality care. Philadelphia: American College of Physicians Position Paper; 2005 (www.acponline.org/hpp/link_pay.pdf).

30 Centers for Medicare and Medicaid Services. Physician Quality Reporting Initiative (www.cms.hhs.gov/pqri).

31 Hermann RC. Improving mental healthcare: A guide to measurement-based quality improvement. Washington, DC: American Psychiatric Publishing; 2005.

32 Williams T, Cerese J, Cuny J. An exploratory project on the state of quality measures in mental health at academic health centers. Harv Rev Psychiatry 2007; 15:34–42.

33 Gladwell M. The tipping point: How little things can make a big difference. New York: Back Bay Books; 2002.