Two Discussion Posts

Clinical Orthopaedics and Related Research®
Clin Orthop Relat Res (2013) 471:3496–3503
DOI 10.1007/s11999-013-3194-1
A Publication of The Association of Bone and Joint Surgeons®
SYMPOSIUM: ABJS CARL T. BRIGHTON WORKSHOP ON OUTCOME MEASURES
Challenges in Outcome Measurement
Clinical Research Perspective
Daniel P. O’Connor PhD, Mark R. Brinker MD
Published online: 25 July 2013
© The Association of Bone and Joint Surgeons® 2013
Abstract
Background Comparative effectiveness research evaluates treatments as actually delivered in routine clinical
practice, shifting research focus from efficacy and internal
validity to effectiveness and external validity ("generalizability"). Such research requires accurate assessments of
the numbers of patients treated and the completeness of
their followup, their clinical outcomes, and the setting in
which their care was delivered. Choosing measures and
methods for clinical outcome research to produce meaningful information that may be used to improve patient care
presents a number of challenges.
Where Are We Now? Orthopaedic surgery research has
many stakeholders, including patients, providers, payers,
and policy makers. A major challenge in orthopaedic
surgery outcome measurement and clinical research is
providing all of these users with valid information for their
respective decision making. At present, no plan exists for
capturing data on such a broad scale and scope.
Where Do We Need to Go? Practical challenges include
identifying and obtaining resources for widespread data
collection and merging multiple data sources. Challenges
of study design include sampling to obtain representative
data, timing of data collection in the episode of care, and
minimizing missing data and study dropout.
How Do We Get There? Resource limitations may be
addressed by repurposing existing clinical resources and
capitalizing on technologic advances to increase efficiencies. Increasing use of rigorous, well-designed observational
research designs can provide information that may be unattainable in clinical trials. Such study designs should
incorporate methods to minimize missing data, to sample
multiple providers, facilities, and patients, and to include
evaluation of potential confounding variables to minimize
bias and allow generalization to broad populations.
Introduction
Current emphasis in clinical research is expanding from the
traditional focus on efficacy, which aims to demonstrate a
treatment effect in well-defined populations under controlled conditions, to comparative effectiveness research,
which seeks to compare and quantify the effects of different treatments in typical clinical populations who are
receiving care from community practitioners. According to
the Institute of Medicine: "The purpose of comparative effectiveness research is to assist consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve health care at both the individual and population levels" [13]. To address this purpose,
comparative effectiveness research focuses on evaluating
current treatments in populations as actually delivered in
routine clinical practice and identifying characteristics of
patients, providers, and systems that may affect treatment
delivery and outcome [30]. We describe some current
challenges to orthopaedic surgery outcome measurement in
the context of comparative effectiveness research and
potential solutions to these challenges.
Where Are We Now?
Effectiveness studies require accurate and responsive measures, large sample sizes, and collection of detailed data
describing characteristics of the patients and the context of the
episode of care. In addition to efficacy studies and clinical
trials, observational studies are anticipated to have a large role
in informing comparative effectiveness research [30].
Orthopaedic surgery outcome research also has many stakeholders, including patients, providers, researchers, payers,
and other associated industries (pharmacologic, medical
device manufacturers, major employers, academic and
research institutions, etc), as well as policy makers [13]. Each
of these stakeholders approaches outcome data with a different set of questions and purposes (Table 1). A major
challenge in orthopaedic surgery outcome measurement
across the required breadth of clinical research is providing all
consumers with valid information for decision making [20].
This information will need to be collected consistently
from orthopaedic surgeons who are practicing in a wide
variety of settings (eg, private or group practice, hospital-based, medical school) and environments (eg, Level 1
trauma center, community hospital, outpatient surgery
center) [8]. At present, no comprehensive plan exists that
allows for capturing data on such a wide scale and scope.
Where Do We Need to Go?
Practical Barriers: How Are We Going to Collect These
Data?
Practical barriers affect our ability to obtain accurate,
comprehensive, and meaningful data across the broad
spectrum of populations and practice variations required
for comparative effectiveness research. Ideally, measures
would be standardized to allow for aggregation and comparison of data across providers, settings, and systems. In
addition, data collection should capture the structure and
multiple aspects of health care, as well as the context of the
complex policies, systems, and environments in which the
practice of orthopaedic surgery occurs [3]. Barriers
impeding these goals include resource constraints and
issues related to data collection and management.
Currently, outcome measures are not routinely used by
community practitioners in clinical decision making and
the delivery of care. This makes it challenging to obtain
such data during routine clinical practice without introducing substantial burden and costs. Administration of a
clinical practice focuses on delivery of patient care and
does not typically have research as a primary focus or
objective. Consequently, allocating resources for research-specific activity is a major challenge for data collection
during routine clinical practice. Availability of equipment,
supplies, floor space, and dedicated staff time and effort for
research activity and data collection may be limited and
Table 1. Examples of stakeholder perspectives on clinical outcomes data

Patients
  Questions: What should be done for me?
  Uses for clinical outcome data: Identify best available treatment (and best provider) for their condition; inform expectations
  Examples of meaningful clinical outcome data: Return to usual activities; quality of life and well-being

Providers
  Questions: What should I be doing for this patient and for my patients in general?
  Uses for clinical outcome data: Identify best available treatment for their patients; improve practice quality and proficiency
  Examples of meaningful clinical outcome data: Technical treatment success; case mix information; patient-reported outcomes

Payers
  Questions: What should be done for (and by) everyone in general?
  Uses for clinical outcome data: Identify best treatments for many conditions across many patients and providers; evaluate effectiveness across medical conditions
  Examples of meaningful clinical outcome data: Costs; patient-reported outcomes; case mix information

Policy makers
  Questions: What is worth doing, and how do we get it to people who need it?
  Uses for clinical outcome data: Identify best practices and standards for the population
  Examples of meaningful clinical outcome data: Costs; preference/utility measures
competing with the demands of providing efficient patient
care.
Furthermore, collection of outcome data and other
meaningful information for research purposes often
involves abstracting multiple clinical and practice administration databases, as well as systems for finance and
billing, scheduling, and various other clinic and hospital
records. These databases and systems often lack standard
content, structure, or format across providers, facilities, and
systems, introducing substantial barriers to aggregating
data for larger-scale analyses. The data in these systems are
often not constructed for research-related data abstraction
and may be insufficient or incomplete for research purposes, making retrieval tedious, error-prone, and costly
[37, 38]. For example, many of these systems require significant reprogramming to create a single, detailed data
matrix of variables in columns indexed by patients in rows,
as is typically needed for statistical analyses.
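To make that reshaping step concrete, here is a minimal sketch, assuming Python with pandas and wholly invented record and measure names, of pivoting "long" clinical event records into such a patient-by-variable matrix:

```python
# Minimal sketch (hypothetical data): reshape "long" clinical event
# records, one row per measurement, into the one-row-per-patient matrix
# usually required for statistical analysis.
import pandas as pd

# As a billing or EHR export might deliver it: one row per observation.
events = pd.DataFrame({
    "patient_id": [101, 101, 101, 102, 102],
    "measure":    ["age", "hip_score", "hip_score", "age", "hip_score"],
    "timepoint":  ["baseline", "baseline", "1yr", "baseline", "baseline"],
    "value":      [64, 55, 88, 71, 48],
})

# Pivot to one row per patient and one column per measure/timepoint pair.
matrix = events.pivot_table(index="patient_id",
                            columns=["measure", "timepoint"],
                            values="value")
matrix.columns = ["_".join(cols) for cols in matrix.columns]
print(matrix)  # columns such as age_baseline, hip_score_1yr, ...
```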
Design Issues: Where and When Are We Going
to Collect These Data?
Comparative effectiveness research requires data and study
designs that measure and evaluate patients and circumstances encountered in routine clinical practice. This focus
on external validity is a key feature of effectiveness
research, which differs from the more classic clinical trial
focus on internal validity to attribute effects to treatment.
To increase external validity, longitudinal data ideally
would be collected from samples of patients from many
hospitals, practices, and surgeons who are also sampled to
represent current practice variations.
An example of such a project is Function and Outcome
Research for Comparative Effectiveness in TJR (FORCE-TJR), which began enrolling patients in 2012 [8]. FORCE-TJR is a consortium of orthopaedic surgery practices in the United States, including high-volume academic departments and medical centers, as well as individual and group
practices. FORCE-TJR collects patient-centered outcome
measures before surgery and annually after surgery but also
includes standardized measures of complications and
adverse events, patient risk characteristics, surgeons and
their practices, and clinical examination data [8]. (Editor’s
note: For more information on FORCE-TJR, please see the
articles by Ayers et al. [2] and Franklin et al. [9], also in
this issue of Clinical Orthopaedics and Related Research.)
Also, time to recovery varies depending on the diagnosis, treatment, and the defined end point or outcome, so a
study design issue arises regarding the number and length
of the data collection intervals and the minimum time after
treatment required to evaluate each particular outcome.
Such complex data collection designs require multisite
collaborations, larger sample sizes, additional data collection, and appropriate statistical methods that incorporate
such design elements into the analytical model [6, 22, 24,
25, 36].
Contextual Factors: What (Else) Are We Going
to Collect?
Contextual factors, which are characteristics of the providers, facilities, and systems associated with an episode of
care, are often ignored in clinical outcome research and
efficacy trials [3]. Randomization and experimental control
eliminate or balance variability of these contextual factors
to minimize their influence on the analyses, since they are
considered a source of bias rather than factors of interest.
By contrast, clinical effectiveness research aims to
determine how characteristics of the providers, facilities,
and systems affect treatment outcome. Consequently, data
on those characteristics would also need to be collected,
which compounds the data collection task and associated
costs considerably. Ignoring these contextual factors in
observational studies, including analyses of registry data,
confounds interpretation, introduces bias in estimates of the
treatment effect, and limits the generalizability and applicability of results.
Incorporating contextual information into outcome
research data collection, reports, and databases will
improve the ability to identify best practices based on
individual patient characteristics, as well as provider,
facility, and system-level factors. Ideally, data collection
during clinical research would capture variations in population, setting, and system characteristics that may be
associated with delivery or outcome of treatment (Table 2).
How Do We Get There?
Practical Barriers: Rethink Current Clinical Processes
Practical challenges to wide-scale outcome measurement
exist, but with the continuing trend toward electronic
medical records and interconnected administrative systems
[27, 37, 38], these issues are ultimately unlikely to be the
limiting factor. Some recent systems have been developed
and implemented using standard measures and connecting
across different data sources (eg, medical records and
financial databases) [8, 10]. These systems integrate the
collection of research-oriented data into the routine clinical
workflow.
In the near term, resource limitations may be addressed by repurposing and redesigning the existing resources to increase efficiencies.
Table 2. Examples of contextual data at multiple levels of the healthcare environment

Patient
  Demographics: age; sex; family status; employment status and occupation; household income
  Diagnosis: stage of disease; acuity; prognosis
  Health-related risk factors and behaviors: smoking; drug use; obesity status; activity; diet
  Comorbidities: comorbid injury or acute illness; chronic disease
  Social and cultural factors: personal expectations for health care; family/social roles and expectations; cultural influences and expectations

Provider (surgeon)
  Experience; practice size; case volume; case mix
  Practice setting: academic medical center; private practice

Healthcare industry
  Hospital and surgical systems: trauma center; community hospital; specialty surgery center
  Delivery systems: available facilities; available services; proximity of facilities and services; referral relationships
  Industry: medical supplies; surgical implants and disposables; technology and equipment; information systems
  Payers and payment systems: private; public; Medicare; Workers Compensation

Population and policy
  Regional demographics; disease prevalence
  Laws, regulations, and policies: local; regional; state; federal
  Environmental factors: urban; rural
  Major employers and industry; community health programs; infrastructure support and funding
For example, clinic flow, staff, patient
wait time, and clinic space may be reorganized to include
outcome measurement as the patient moves through the
clinic [8, 10]. In our clinic, patients move in stages from
the waiting area to an automated touchscreen data collection system before going to the examination room. The
instruments presented to the patient depend on the limb or
joint. The data collection takes 15 to 20 minutes, which
was the average time that patients previously spent in the
waiting area. The touchscreen system requires little monitoring by clinic staff, and the delivery of care is
uninterrupted.
There are a number of available and developing information technologies and methods that also may aid in data
collection, such as computer-adaptive testing [7, 12, 28]. In
computer-adaptive testing, patients are presented only
those survey items that most closely match the severity of
their condition. Since those items represent a subset of the
total survey, reliable data can be collected more efficiently
and with less patient burden. Capitalizing on such methods
may automate and streamline data collection and provide
for instantly available scoring, tabulating, reporting, and
storing of data. Inclusion of information services staff or
service as a vested partner in the design of the system,
including the linking of multiple data systems, can lead to
creative solutions and additional efficiencies, such as
avoiding duplicate data collection or data entry, incorporating system redesigns into regular system maintenance,
and ensuring system functions and outputs match the needs
of the end user, the clinician-researcher. Our group practice
has been using such a system since the late 1990s, and
several recent publications describe similar systems that
demonstrate the feasibility of incorporating outcome
measurement into clinical practice [8, 10, 27].
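To illustrate the item-selection logic described above, the following is a simplified sketch; the item bank, the difficulty values, and the fixed-step ability update are all invented for demonstration, whereas real computer-adaptive tests use calibrated item-response-theory parameters and proper ability estimation.

```python
# Simplified sketch of computer-adaptive item selection: after each
# response, nudge the ability estimate and present the unanswered item
# whose difficulty is closest to it. Items, difficulties, and the
# fixed-step update are invented for illustration only.
bank = [
    ("Walk across a room", -2.0),
    ("Climb one flight of stairs", -0.5),
    ("Walk one mile", 0.8),
    ("Run two miles", 2.2),
]

def run_cat(can_do, n_items=3):
    ability, asked = 0.0, []
    for _ in range(n_items):
        # Present the most informative remaining item: the one whose
        # difficulty best matches the current ability estimate.
        item = min((i for i in bank if i not in asked),
                   key=lambda i: abs(i[1] - ability))
        asked.append(item)
        ability += 0.7 if can_do(item[0]) else -0.7
    return ability

# A patient who can do everything except "Run two miles" is scored with
# only 3 of the 4 items, reducing respondent burden.
print(run_cat(lambda activity: activity != "Run two miles"))
```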
Design Issues: Use Complex Data Collection
to Capture Complex Effects
In addition to the common ‘‘pre-post’’ clinical research
design, recovery may also be modeled as a function of
time, which requires three or more data points to create a
time course or trajectory for each patient [15]. For example, a study objective may be to determine which of two
procedures results in a quicker recovery. Figure 1 shows
the results of a fictional study in which the final outcome
for both treatments is at the same functional level, but the
trajectories for each group show that this level occurs
sooner in the ‘‘new’’ treatment (Week 10) compared to the
‘‘standard’’ treatment (Week 16). Patient characteristics
and other variables measuring characteristics of the clinicians (eg, experience) and practice or treatment variations
(eg, physical therapy duration) may be tested in such a
model to determine how they affect the recovery trajectory
[15].
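A sketch of how such a trajectory model might be fit, assuming Python with statsmodels and simulated data; the variable names (week, new_tx) and all numbers are invented. The week-by-treatment interaction is the term that asks whether the "new" treatment steepens the recovery slope.

```python
# Hypothetical sketch: model recovery as a trajectory over time with a
# linear mixed-effects (growth curve) model rather than a single
# pre-post comparison. Data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for patient in range(40):
    new_tx = patient % 2                              # 0 = standard, 1 = new
    slope = 3.0 + 1.5 * new_tx + rng.normal(0, 0.5)   # faster if new
    for week in (0, 2, 4, 8, 12, 16):                 # 3+ points per patient
        score = 40 + slope * week + rng.normal(0, 3)
        rows.append({"patient": patient, "week": week,
                     "new_tx": new_tx, "score": score})
data = pd.DataFrame(rows)

# Random intercept and slope per patient; the week:new_tx coefficient
# tests whether the new treatment changes the recovery trajectory.
model = smf.mixedlm("score ~ week * new_tx", data,
                    groups=data["patient"], re_formula="~week")
print(model.fit().summary())
```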
Practice registries will likely have a large role in
informing comparative effectiveness research but require
careful design to maximize their utility. Many existing
registries track clinical outcomes such as infection, reoperation rates, and implant failure but have less focus on
patient-centered outcomes [8]. Investigation of individual-level variation and determinants of outcomes requires
information beyond the set of demographic variables most
often captured in registries (Table 2). In combination with
cost and utilization data, explaining the individualized time
course of functional recovery can provide powerful
information regarding which of several treatments is both effective and efficient, a topic currently of tremendous interest to payers and policy makers.

Fig. 1 For this fictional study, the recovery trajectory of the "new" treatment differs from the "standard" treatment, although both have the same final effect on outcome scores in terms of preoperative-to-postoperative (Post − Pre) change. Evaluating recovery trajectories requires additional data collection at more time points but may be important when comparing the effectiveness of treatments and the respective societal costs, such as the patient's time away from work.
While randomized clinical trials will remain a mainstay
for producing valid medical evidence, rigorous, well-designed observational research designs and methods (case
series, case-control studies, cohort studies, etc) are also
needed to inform comparative effectiveness research.
Guidelines for design and reporting of quality observational studies are available [33, 35]. One approach to
obtaining representative data in observational studies is by
sampling at multiple levels of the healthcare system. While
sampling patients is very common in clinical research,
sampling of providers and facilities is less common,
although some recent examples do exist [8]. Sampling of
payer organizations or large healthcare systems is rare. The
process of patient care is affected by the different settings,
environments, and policies in which it is delivered, so
study designs should consider incorporating measures of
these factors when possible. These studies should also
include measures of potential confounders, which are
variables related to both predictors and outcomes that may
increase or decrease the observed relation between the two.
Ignoring confounders may result in large bias and spurious
findings [4]. When data on these variables are available, an
appropriate analytical approach should be used to obtain
unbiased results [1, 4].
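As a toy demonstration of that point (simulated data, invented numbers, Python with statsmodels assumed): below, younger patients preferentially receive the new treatment and also recover better, so the crude group comparison overstates the treatment effect, while a model that includes age recovers a value near the true effect.

```python
# Hypothetical sketch of confounder adjustment. Age is a confounder:
# it influences both treatment assignment and outcome, so the crude
# estimate is biased; adjusting for age removes the bias.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
age = rng.uniform(40, 80, n)
# Confounding: younger patients are more likely to get the new treatment.
new_tx = (rng.uniform(0, 1, n) < (80 - age) / 40).astype(float)
# True model: new treatment adds 5 points; each year of age costs 0.4.
score = 90 + 5 * new_tx - 0.4 * age + rng.normal(0, 5, n)

crude = sm.OLS(score, sm.add_constant(new_tx)).fit()
adjusted = sm.OLS(score,
                  sm.add_constant(np.column_stack([new_tx, age]))).fit()
print(f"crude effect:    {crude.params[1]:.1f}")     # inflated by confounding
print(f"adjusted effect: {adjusted.params[1]:.1f}")  # near the true value, 5
```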
A challenge for any longitudinal medical research is
dealing with missing data and patient dropout [16, 17, 23,
26]. Current recommendations are to incorporate plans for
minimizing missing data and dropout at the design stage. A
2010 National Research Council monograph and several
recent summaries of that work [16, 17, 23, 26] contain a
number of strategies for limiting missing data, including
limiting data collection burden for patients and staff (eg,
automated and Web-based response systems [19, 31]),
closely monitoring data collection during the active study
period, increasing incentive amounts for longer followup,
identifying patients at high risk for dropout, offering access
to other treatments after the study period (if desired), and
recording patients’ reasons for withdrawal. These reports
also provide recommendations for appropriate statistical
strategies and techniques for analyses in the presence of
missing data. Some strategies for locating patients who fail
to return for followup include using the Internet and other
public data sources such as motor vehicle registration
records and credit bureau databases [5, 14, 18, 29]. These
methods have varying success and often require cross-referencing multiple such resources. Extensive time, effort,
and money are required to locate patients long after conclusion of active treatment or dropout, so incorporating
methods and resources to minimize dropout during study
planning is likely to be a good investment.
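One such analysis strategy, sketched here under assumed conditions (simulated data, missingness that depends only on the observed age variable, Python with statsmodels), is multiple imputation by chained equations, which pools estimates across several imputed datasets rather than discarding incomplete cases:

```python
# Hypothetical sketch: analysis in the presence of missing outcome data
# using multiple imputation by chained equations (MICE). All data and
# variable names are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation.mice import MICE, MICEData

rng = np.random.default_rng(3)
n = 300
age = rng.uniform(40, 80, n)
new_tx = rng.integers(0, 2, n).astype(float)
score = 60 + 5 * new_tx - 0.3 * age + rng.normal(0, 5, n)
# Older patients are more often lost to followup. Missingness depends
# only on observed age, so complete-case analysis would be biased but
# imputation conditional on age is defensible.
score[rng.uniform(0, 1, n) < (age - 40) / 80] = np.nan

data = pd.DataFrame({"score": score, "new_tx": new_tx, "age": age})
imp = MICEData(data)                       # chained-equations imputation
mice = MICE("score ~ new_tx + age", sm.OLS, imp)
result = mice.fit(n_burnin=10, n_imputations=10)
print(result.summary())                    # estimates pooled across imputations
```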
Contextual Factors: Capture Variance to Explain
Variance
Attaining meaningful and robust contextual data requires
collection protocols that focus on gathering information by
sampling or targeting multiple units (eg, patients, providers, etc) at each of the multiple levels of the healthcare
environment (eg, clinics, hospitals, systems, geopolitical
regions). The goal would be to obtain a complex data
structure that represents the complex nature of the
respective communities and healthcare systems. This
complex structure could be used to investigate how factors
at all levels interact to produce an outcome in a patient. To
the extent that the contextual factors are actionable or
alterable, evidence-based recommendations for changes in
the delivery and organization of health care may be generated. For example, Martin et al. [21] used data from more
than 6000 patients in a state database of nonfederal hospitals that included characteristics of the patients, surgeons,
and hospitals to investigate 90-day reoperation and complication rates after lumbar fusion surgery. While
complication and reoperation rates varied between hospitals, a substantial amount of the variability was attributable
to differences in surgeons’ practices even after accounting
for patient-level factors (eg, age, comorbidities, diagnosis,
severity) [21]. This type of multilevel analysis provides
results that may identify potential changes in service
delivery to decrease complications and improve outcome.
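A rough sketch of the multilevel model this describes, with simulated data; the continuous complication-severity score, the nesting sizes, and all parameter values are invented (the cited study modeled actual complication and reoperation events). Random effects at the hospital and surgeon levels let the fitted variance components show where the outcome variability sits:

```python
# Hypothetical sketch of a multilevel ("capture variance to explain
# variance") analysis: random intercepts for hospitals, with surgeons
# nested within hospitals as a variance component, and a patient-level
# fixed effect. Data are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for hospital in range(15):
    h_effect = rng.normal(0, 1.0)       # hospital-level variation
    for surgeon in range(4):
        s_effect = rng.normal(0, 2.0)   # larger surgeon-level variation
        for _ in range(30):
            age = rng.uniform(40, 85)
            y = 5 + 0.05 * age + h_effect + s_effect + rng.normal(0, 3)
            rows.append({"hospital": hospital,
                         "surgeon": f"{hospital}-{surgeon}",
                         "age": age, "y": y})
data = pd.DataFrame(rows)

# Hospital random intercept plus a surgeon variance component; the
# summary reports how much variance sits at each level after adjusting
# for the patient-level factor (age).
model = smf.mixedlm("y ~ age", data, groups=data["hospital"],
                    re_formula="1",
                    vc_formula={"surgeon": "0 + C(surgeon)"})
print(model.fit().summary())
```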
Discussion
The current shift in focus of orthopaedic surgery outcome
research from efficacy toward effectiveness requires a
corresponding shift in the type and extent of data needed.
The shift in data requirements presents a number of challenges for clinical researchers. To become common in
general clinical practice, outcome measures have to add
value to the healthcare process and experience for all
parties [32]. Development of a set of well-defined purposes
and questions to address the values and needs of the outcome data stakeholders would serve to set the clinical
research agenda for the upcoming decade [34]. A strong
research agenda is needed that outlines priorities to guide
decisions for obtaining consistent and comprehensive
orthopaedic surgery outcome data.
One good strategy for development of the research
agenda would be to involve advocacy groups (eg, Arthritis
Foundation), provider institutions (eg, large hospital systems, professional associations), payers, industry, and
policy makers directly in the process, and to consider including them in certain aspects of the collection, analysis, and
interpretation of outcomes and associated data. This strategy was used by the Institute of Medicine when assembling
its report regarding initial priorities for comparative
effectiveness research in the United States [13]. A collaborative approach including stakeholders as partners from
all sectors and levels may be the most effective way to
develop a plan and strategy.
Practical barriers to more comprehensive outcome data
collection appear substantial but are not insurmountable if
the data have value for everyone involved. The key to overcoming practical barriers is to redesign current systems and processes: a series of small changes to the time, space, and tasks of clinic flow, together with evolving technologic advances that increase efficiency, can be sufficient to facilitate data collection.
Study planning and data collection protocols need to consider the temporality of the condition and recovery, as well
as relevant environmental and systemic influences, and
include plans to minimize dropout and other causes of
missing data.
Use of rigorous design methods and appropriate analytical methods can mitigate many of the limitations
inherent in observational research to inform comparative
effectiveness research. Such research designs capture
‘‘actual practices’’ in the community and therefore can
have high external validity [11]. As with any type of
research, observational studies require careful attention to
design, data collection, and analyses to obtain valid conclusions. Care should be taken to ensure that the patients,
providers, and systems included in the study are representative of the respective populations to allow
generalization to a defined population. Identifying factors
affecting individual recovery of function and quality of
life, particularly when combined with cost and utilization
data, can provide powerful information regarding which
treatment is both effective and efficient, a topic currently of
tremendous interest to payers and policy makers.
Health care is a complex system, so data collection
should capture important aspects of the complex structure.
Contextual information about the episode of care aids
interpretation and attribution of variability in outcome.
Attention should be given to providing meaningful and
adequately detailed descriptions of patients, providers,
facilities, and systems. The resulting data would provide
more complete evidence that may be used to compare
treatment options, describe risk-adjusted outcome, explain
heterogeneity in treatment response, and inform clinical
care and policy-level decisions. Comprehensive outcome
data collection on a broad scale can be used to identify
optimal treatment choices for specific populations and
subpopulations and improve the quality of orthopaedic
health care.
References
1. Ahn H, Court-Brown CM, McQueen MM, Schemitsch EH. The
use of hospital registries in orthopaedic surgery. J Bone Joint
Surg Am. 2009;91(suppl 3):68–72.
2. Ayers DC, Zheng H, Franklin PD. Integrating patient-reported outcomes into orthopaedic clinical practice: proof of concept from FORCE-TJR. Clin Orthop Relat Res. 2013. DOI 10.1007/s11999-013-3143-z.
3. Barrack RL. The results of TKA: what the registries don’t tell us.
Orthopedics. 2011;34:e485–e487.
4. Bryant DM, Willits K, Hanson BP. Principles of designing a
cohort study in orthopaedics. J Bone Joint Surg Am. 2009;91(suppl 3):
10–14.
5. Cadarette SM, Dickson L, Gignac MA, Beaton DE, Jaglal SB,
Hawker GA. Predictors of locating women six to eight years after
contact: internet resources at recruitment may help to improve
response rates in longitudinal research. BMC Med Res Methodol.
2007;7:22.
6. Cook JA, Bruckner T, MacLennan GS, Seiler CM. Clustering in
surgical trials—database of intracluster correlations. Trials.
2012;13:2.
7. Cook KF, Roddey TS, O’Malley KJ, Gartsman GM. Development of a Flexilevel Scale for use with computer-adaptive testing
for assessing shoulder function. J Shoulder Elbow Surg. 2005;14:
90S–94S.
8. Franklin PD, Allison JJ, Ayers DC. Beyond joint implant registries: a patient-centered research consortium for comparative
effectiveness in total joint replacement. JAMA. 2012;308:1217–
1218.
9. Franklin PD, Harrold L, Ayers DC. Incorporating patient-reported outcomes in total joint arthroplasty registries: challenges and opportunities. Clin Orthop Relat Res. 2013. DOI 10.1007/s11999-013-3193-2.
10. Goldstein J. Private practice outcomes: validated outcomes data
collection in private practice. Clin Orthop Relat Res. 2010;468:
2640–2645.
11. Hoppe DJ, Schemitsch EH, Morshed S, Tornetta P 3rd, Bhandari
M. Hierarchy of evidence: where observational studies fit in and
why we need them. J Bone Joint Surg Am. 2009;91(suppl 3):
2–9.
12. Hung M, Clegg DO, Greene T, Weir C, Saltzman CL. A lower
extremity physical function computerized adaptive testing
instrument for orthopaedic patients. Foot Ankle Int. 2012;33:
326–335.
13. Institute of Medicine. Initial National Priorities for Comparative
Effectiveness Research. Washington, DC: The National Academies Press; 2009.
14. King PJ, Malin AS, Scott RD, Thornhill TS. The fate of patients
not returning for follow-up five years after total knee arthroplasty.
J Bone Joint Surg Am. 2004;86:897–901.
15. Kozlowski AJ, Pretz CR, Dams-O’Connor K, Kreider S, Whiteneck G. Applying individual growth curve models to evaluate
change in rehabilitation: a National Institute on Disability and
Rehabilitation Research Traumatic Brain Injury Model Systems
Report. Arch Phys Med Rehabil. 2012;94:589–596.
16. Little RJ, Cohen ML, Dickersin K, Emerson SS, Farrar JT,
Neaton JD, Shih W, Siegel JP, Stern H. The design and conduct
of clinical trials to limit missing data. Stat Med. 2012;31:3433–
3443.
17. Little RJ, D’Agostino R, Cohen ML, Dickersin K, Emerson
SS, Farrar JT, Frangakis C, Hogan JW, Molenberghs G,
Murphy SA, Neaton JD, Rotnitzky A, Scharfstein D, Shih
WJ, Siegel JP, Stern H. The prevention and treatment of
missing data in clinical trials. N Engl J Med. 2012;367:
1355–1360.
18. Louie DL, Earp BE, Blazar PE. Finding orthopedic patients lost
to follow-up for long-term outcomes research using the Internet:
an update for 2012. Orthopedics. 2012;35:595–599.
19. Lubowitz JH, Smith PA. Current concepts in clinical research:
web-based, automated, arthroscopic surgery prospective database
registry. Arthroscopy. 2012;28:425–428.
20. Marjoua Y, Butler CA, Bozic KJ. Public reporting of cost and
quality information in orthopaedics. Clin Orthop Relat Res.
2012;470:1017–1026.
21. Martin BI, Mirza SK, Franklin GM, Lurie JD, MacKenzie TA,
Deyo RA. Hospital and surgeon variation in complications and
repeat surgery following incident lumbar fusion for common
degenerative diagnoses. Health Serv Res. 2013;48:1–25.
22. Morshed S, Tornetta P 3rd, Bhandari M. Analysis of observational studies: a guide to understanding statistical methods.
J Bone Joint Surg Am. 2009;91(suppl 3):50–60.
23. National Research Council. The Prevention and Treatment of
Missing Data in Clinical Trials. Panel on Handling Missing Data
in Clinical Trials. Committee on National Statistics, Division of
Behavioral and Social Sciences and Education. Washington, DC:
The National Academies Press; 2010.
24. Randsborg PH, Sivertsen EA, Skramm I, Saltytė Benth J, Gulbrandsen P. The need for better analysis of observational studies
in orthopedics: a retrospective study of elbow fractures in children. Acta Orthop. 2010;81:377–381.
25. Roberts C, Roberts SA. Design and analysis of clinical trials
with clustering effects due to treatment. Clin Trials. 2005;2:152–
162.
26. Scharfstein DO, Hogan J, Herman A. On the prevention and
analysis of missing data in randomized clinical trials: the state of
the art. J Bone Joint Surg Am. 2012;94(suppl 1):80–84.
27. Shah J, Rajgor D, Pradhan S, McCready M, Zaveri A, Pietrobon
R. Electronic data capture for registries and clinical trials in
orthopaedic surgery: open source versus commercial systems.
Clin Orthop Relat Res. 2010;468:2664–2671.
28. Siebens H, Andres PL, Pengsheng N, Coster WJ, Haley SM.
Measuring physical function in patients with complex medical
and postsurgical conditions: a computer adaptive approach. Am J
Phys Med Rehabil. 2005;84:741–748.
29. Smith JS, Watts HG. Methods for locating missing patients for
the purpose of long-term clinical studies. J Bone Joint Surg Am.
1998;80:431–438.
30. Sox HC, Goodman SN. The methods of comparative effectiveness research. Annu Rev Public Health. 2012;33:425–445.
31. Stewart JI, Moyle S, Criner GJ, Wilson C, Tanner R, Bowler RP,
Crapo JD, Zeldin RK, Make BJ, Regan EA, For The COPDGene
Investigators. Automated telecommunication to obtain longitudinal follow-up in a multicenter cross-sectional COPD study.
COPD. 2012;9:466–472.
32. Swiontkowski MF, Buckwalter JA, Keller RB, Haralson R. The
outcomes movement in orthopaedic surgery: where we are and
where we should go. J Bone Joint Surg Am. 1999;81:732–740.
33. Vandenbroucke JP, von Elm E, Altman DG, Gotzsche PC,
Mulrow CD, Pocock SJ, Poole C, Schlesselman JJ, Egger M;
STROBE Initiative. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): explanation and
elaboration. Ann Intern Med. 2007;147:W163–W194.
34. VanLare JM, Conway PH, Sox HC. Five next steps for a new
national program for comparative-effectiveness research. N Engl
J Med. 2010;362:970–973.
35. von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC,
Vandenbroucke JP; STROBE Initiative. The Strengthening the
Reporting of Observational Studies in Epidemiology (STROBE)
statement: guidelines for reporting observational studies. Ann
Intern Med. 2007;147:573–577.
36. Vorhies JS, Wang Y, Herndon JH, Maloney WJ, Huddleston JI.
Decreased length of stay after TKA is not associated with
increased readmission rates in a national Medicare sample. Clin
Orthop Relat Res. 2012;470:166–171.
37. Weiskopf NG, Weng C. Methods and dimensions of electronic
health record data quality assessment: enabling reuse for clinical
research. J Am Med Inform Assoc. 2013;20:144–151.
38. Weng C, Appelbaum P, Hripcsak G, Kronish I, Busacca L,
Davidson KW, Bigger JT. Using EHRs to integrate research with
patient care: promises and challenges. J Am Med Inform Assoc.
2012;19:684–687.
Hey Tutor,
Below are instructions for two different discussion posts, to be submitted on or before Wednesday. Please prepare them as two separate Word documents, each single-spaced and fitting on one page, and give each document its correct title.
Discussion 1 Topic 1: Emerging Technologies in
Health Care
Today's health care settings rely on technology more than ever before.
Programs such as meaningful use guidelines and other regulatory requirements
are pushing health care organizations to improve their use of electronic health
records, better track and store data, and share information. Thus, managers find
themselves challenged to efficiently and effectively meet this digital demand.
However, emerging technologies are available to help organizations meet these demands.
For this discussion, research and discuss with your peers some of the emerging
technologies in health care that are used for information sharing and other data
analytics.
Discussion 2 Topic 2: Decision Making and Risk
Management
Consider and discuss what emerging technologies help organizations with
performance measurement, decision making, and risk management. Be sure to
consider what databases are available to large health care organizations to help
benchmark themselves against their peers.
In addition to the resources I provided, please read up on emerging technologies to help you develop your answers to the two discussion topics above.
Lauriks et al. BMC Public Health 2012, 12:214
http://www.biomedcentral.com/1471-2458/12/214
RESEARCH ARTICLE
Open Access
Performance indicators for public mental
healthcare: a systematic international inventory
Steve Lauriks1,2*, Marcel CA Buster1, Matty AS de Wit1, Onyebuchi A Arah3 and Niek S Klazinga2
Abstract
Background: The development and use of performance indicators (PI) in the field of public mental health care
(PMHC) have increased rapidly in the last decade. To gain insight into the current state of PI for PMHC in nations and
regions around the world, we conducted a structured review of publications in scientific peer-reviewed journals
supplemented by a systematic inventory of PI published in policy documents by (non-) governmental
organizations.
Methods: Publications on PI for PMHC were identified through database- and internet searches. Final selection was
based on review of the full content of the publications. Publications were ordered by nation or region and
chronologically. Individual PI were classified by development method, assessment level, care domain, performance
dimension, diagnostic focus, and data source. Finally, the evidence on feasibility, data reliability, and content-,
criterion-, and construct validity of the PI was evaluated.
Results: A total of 106 publications were included in the sample. The majority of the publications (n = 65) were
peer-reviewed journal articles and 66 publications specifically dealt with performance of PMHC in the United States.
The objectives of performance measurement vary widely from internal quality improvement to increasing
transparency and accountability. The characteristics of 1480 unique PI were assessed. The majority of PI are based on stakeholder opinion, assess care processes, are not specific to any diagnostic group, and utilize administrative data
sources. The targeted quality dimensions varied widely across and within nations depending on local professional
or political definitions and interests. For all PI some evidence for the content validity and feasibility has been
established. Data reliability, criterion- and construct validity have rarely been assessed. Only 18 publications on
criterion validity were included. These show significant associations in the expected direction on the majority of PI,
but mixed results on a noteworthy number of others.
Conclusions: PI have been developed for a broad range of care levels, domains, and quality dimensions of PMHC.
To ensure their usefulness for the measurement of PMHC performance and advancement of transparency,
accountability and quality improvement in PMHC, future research should focus on assessment of the psychometric
properties of PI.
Background
Public mental healthcare (PMHC) systems are responsible for the protection of health and wellbeing of a community, and the provision of essential human services to
address these public health issues [1,2]. The PMHC-system operates on three distinct levels of intervention. At
a population-level, PMHC-services promote wellbeing of
the total population within a catchment area. At a risk
group-level, PMHC-services are concerned with the prevention of psychosocial deterioration in specific subgroups subject to risk-factors such as long-term
unemployment, social isolation, and psychiatric disorders. Finally, at an individual care-level, PMHC-services
provide care and support for individuals with severe and
complex psychosocial problems who are characterized
either by not actively seeking help for their psychiatric
or psychosocial problems, or by not having their health
needs met by private (regular) health care services [3].
However, a service developed or initially financed with
public means, as a reaction to an identified hiatus in the
private health care system, may eventually be incorporated in the private health care system. The dynamics of
this relation between the public and private mental
health care systems are determined locally by variations
in the population, type and number of health care providers, and the available public means. Thus, the specific
services provided by the PMHC system at any moment
in time differ between nations, regions, or even
municipalities.
At the individual care-level, four specific functions of PMHC can be identified [4]:
1) guided referral, which includes signaling and reporting (multi-) problem situations, making contact with the client, screening to clarify care-needs, and executing a plan to guide the client to care;
2) coordination and management of multi-dimensional care provided to persons who present with complex clinical conditions, ensuring cooperation and information-exchange between providers (e.g. mental health-, addiction-, housing- and social services);
3) development and provision of treatment that is not offered by private healthcare organizations, often by funding private healthcare organizations to provide services for specific conditions (e.g. early psychosis intervention services, or methadone maintenance services); and
4) monitoring trends in the target group.
Accountability for services and supports delivered, and
funding received, is becoming a key component in the
public mental health system. As part of a health system,
each organization is not only accountable for its own
services, but has some responsibility for the functioning
of the system as a whole as well [5]. International
healthcare organizations, as well as national and regional
policymakers are developing performance indicators (PI)
to measure and benchmark the performance of health
care systems as a precondition for evidence-based health
policy reforms [e.g. [6-11]]. Many organizations have
initiated the development and implementation of quality
assessment strategies in PMHC. However, a detailed
overview of PI for PMHC is lacking.
To provide an overview of the current state of PI for
PMHC we conducted a structured review of publications in scientific peer-reviewed journals supplemented
by a systematic inventory of PI published in policy
documents and reports by (non-) governmental organizations (so-called ‘grey literature’). First, the different
initiatives on performance measurement in PMHC-systems and services were explored. Second, the unique PI
were categorized according to their characteristics
including domain of care (i.e. structure, process or outcome), dimension of quality (e.g. effectiveness, continuity, and accessibility), and method of development (e.g.
expert opinion, or application of existing instruments).
Finally, we assessed the evidence on the reliability and
validity of these performance measures as indicators of
quality for public mental healthcare.
Methods
Publications reporting on PI for PMHC were identified
through database- and internet searches. Ovid Medline,
PsycINFO, CINAHL, and Google (Scholar) searches were conducted using any one of the following terms and/or MeSH headings on (aspects of) PMHC: 'mental health system', 'public health system', 'mental health services', 'public health services', 'mental health care', 'public health care', 'state medicine', 'mental disorders', 'addiction', 'substance abuse', 'homeless', and 'domestic violence'; combined with any one of the following terms/MeSH headings on performance measurement: 'quality indicator', 'quality measure', 'performance indicator', 'performance measure', and 'benchmarking'.
Database searches were limited to literature published
in the period between 1948 and 2010; Google search
was conducted in October 2009. Included websites were
revisited in February 2011 to check for updates. Publications had to be in the English or Dutch language to be
included. Studies, reports and websites were included
for further review if a focus on quality measurement of
healthcare services related to PMHC became apparent
from title, header, or keywords. Abstracts and executive
summaries were reviewed to exclude publications on
somatic care; elderly care; children’s healthcare; and
healthcare education. Final selection was based on
review of the full content, excluding publications that
did not specify the measures applied to assess health
care performance. Reference lists of the included publications were reviewed to assure all relevant publications
were included in the final sample. Generally, all publicly funded services aimed at the preservation, maintenance, or improvement of the mental and social health of
an adult population, risk-group or individual were considered part of the PMHC system. However, publications on PI designed for private mental health care were
included when these PI were applied, or referred to, in
publications on PMHC quality assessment.
Included publications were ordered by nation or
region. Publications from the same nation were ordered
chronologically. Subsequently, we assessed the objective
of the publication, the designation of the proposed PI
(-set) or quality framework, and the purpose of the proposed PI (-set) or quality framework.
The individual PI were then classified by the following
characteristics: a) method of development; b) level of
assessment; c) domains of care as proposed by Donabedian [12]; d) dimensions of performance; e) focus on
specific diagnosis or conditions; and f) data source. In
some cases, the care domain, and/or dimension of performance were not explicitly reported in the publication.
The missing domain or dimension was then specified by
the author based on: 1) commonly used dimensions in
that region as described by Arah et al. [13]; 2) purpose
and perspective of the quality framework; and 3) similar
PI from other publications for which a domain and/or
dimension was specified.
Finally, evidence on the feasibility, data reliability and
validity of the included PI was reviewed. Feasibility of PI
refers to the possibility that an indicator can be implemented in the PMHC-system or service, given the current information-infrastructure and support by the field.
Data reliability refers to the accuracy and completeness
of data, given the intended purposes for use [14]. Three
forms of validity are distinguished: a) Content-related
validity, which refers to evidence that an indicator covers important aspects of the quality of PMHC. b) Criterion-related validity, which refers to evidence that an
indicator is related to some external criterion that is
observable, measurable, and taken as valid on its face. c)
Construct-related validity, which refers to evidence that
an indicator measures the theoretical construct of quality and/or performance of PMHC [15,16].
Results
Publications on PMHC quality measurement
The library-database and internet search resulted in
3193 publications in English- and Dutch-language peer-reviewed journals and websites from governmental as
well as nongovernmental organizations. Further selection based on title- and keyword criteria resulted in the
inclusion of approximately 480 publications. After
reviewing the abstracts, 152 publications on quality
measurement in adult (public) mental health care were
included. Final selection based on full publication content resulted in the exclusion of another 46 publications
that did not explicitly specify the measures applied to
assess health care performance, leaving 106 publications
to be included in the final sample.
Table 1 shows the included publications structured by
nation/region and date of (first) publication.
Publications on indicator development, implementation, and validation within ten nations were found.
Three international organizations (i.e. European Union,
OECD, and WHO) developed PI for between-nation
comparisons. The majority of the publications (n = 90,
85%) focus on the quality of PMHC in nations where
English is the native language (Australia, Canada, United
Kingdom, and USA), and 66 publications (61%) are concerned with PMHC in the United States. In contrast,
publications that focus on the measurement of PMHC
quality in Spain, Germany, Italy, South Africa, the Netherlands, and Singapore together only account for 12% of
the total sample. The majority of the publications were
found in peer-reviewed journals (n = 65; 61%), the
remaining publications (n = 41; 39%) consisted of
reports, bulletins, and websites by governmental and
non-governmental organizations. In the next sections,
the performance measurement initiatives and publications per nation/region are discussed.
United States
In the United States, essential public (mental) health
care services are jointly funded by the federal Department of Health and Human Services (DHHS) and state
governments. Services are provided by state and local
agencies and at a federal level administered by eleven
DHHS-divisions, which include the Centers for Disease
Control and Prevention (CDC), the Agency for Healthcare Research and Quality (AHRQ) and the Substance
Abuse and Mental Health Service Administration
(SAMHSA) [1].
A considerable number of initiatives on performance
measurement of the public mental healthcare in the
United States at national, state, local, and service level
were found. In the 1990s, the growth of managed care
delivery systems in behavioral health raised the need for
quality assurance and accountability instruments, and
led to an increase in the number of publications on the
development of performance measures in scientific literature. A total of 121 measures for various aspects and
dimensions of the performance of public mental health
providers, services, and systems were proposed
[20,22,24,25,27,29,33,36].
In the following section, ten national initiatives that
focus on between-state comparable PI are discussed in
more detail. Some distinctive examples of within-state
PMHC performance measurement initiatives are discussed subsequently.
One of the first, more comprehensive, and most widespread quality indicator systems in the U.S. is the Health
Plan/Employer Data Information Set (HEDIS).
HEDIS is a set of standardized performance measures
designed to enable purchasers and consumers to reliably
compare the performance of managed care plans. Relatively few measures of mental health care and substance
abuse services were included in the early versions of the
HEDIS. The 2009 version only includes six measures of
the performance of these services [19]. With increasing
popularity of managed care plan models in PMHC, the
HEDIS mental health care performance measures are
widely accepted in private as well as public mental
health care performance measurement projects. The
measures were utilized to assess the relationship of
mental health care quality with general health care quality and mental health care volume in health plans that
included programs funded by state and federal governments (i.e. Medicaid) [48,64].
A set of quality indicators that is more specifically tailored to measuring the quality of mental health services
Table 1. Publications and PMHC quality measurement initiatives per nation/region. For each publication: author/organization (year); objective of the publication/study; PI set or framework; and purpose of the PI set or framework.

USA

Simpson & Lloyd (1979) [17]
  Objective: Cohort study relating client perception of program performance to outcomes
  PI/framework: Client evaluations of drug abuse treatment in relation to follow-up outcomes
  Purpose: Assess drug treatment effectiveness

Koran & Meinhardt (1984) [18]
  Objective: Assessment of validity of County Need Index
  PI/framework: Social indicators in statewide mental health planning: lessons from California
  Purpose: Promote equity in the distribution of mental health funds

National Committee for Quality Assurance (since 1993) [19]
  Objective: PI development, assessment of usefulness and feasibility, and implementation
  PI/framework: Health Plan/Employer Data Information Set (HEDIS)
  Purpose: Help employers to evaluate and compare performance among HMOs and other health plans

McLellan et al. (1994) [20]
  Objective: Exploration of patient and treatment factors in outcomes
  PI/framework: Similarity of outcome predictors across opiate, cocaine, and alcohol treatments; role of treatment services
  Purpose: Evaluate effectiveness of substance abuse treatment in reducing substance use and improving social adjustment

Mental Health Statistics Improvement Program (1996) [21]
  Objective: PI development, review of quality measurement performance initiatives
  PI/framework: MHSIP Consumer-oriented Mental Health Report Card
  Purpose: Capture and reflect important characteristics of mental health service delivery

Srebnik et al. (1997) [22]
  Objective: PI development based on literature review and stakeholder opinion, assessment of PI validity
  PI/framework: Outcome indicators for monitoring the quality of public mental health care
  Purpose: Assess the quality of public mental health care by consumers and providers

Lyons et al. (1997) [23]
  Objective: Determine whether readmissions can serve as a PI for an inpatient psychiatric service
  PI/framework: Predicting readmission to psychiatric hospital in a managed care environment: implications for quality indicators
  Purpose: Provide program managers, third-party payers, and policy makers with information regarding the functioning of health services

Baker (1998) [24]
  Objective: PI development and presentation of method of quality monitoring
  PI/framework: A PI spreadsheet for physicians in community mental health centers
  Purpose: Demonstrate progress in meeting objectives and implementing strategies for mental health care to legislators and stakeholders

Carpinello et al. (1998) [25]
  Objective: Explore development, implementation, and early results of using a comprehensive performance management system
  PI/framework: Managing the performance of mental health managed care: an example from New York State's Prepaid Mental Health Plan
  Purpose: Reflect the concerns of multiple stakeholders and form a foundation for continuous quality improvement activities and information-reporting products

Pandiani et al. (1998) [26]
  Objective: PI development and assessment of PI sensitivity and usefulness
  PI/framework: Using incarceration rates to measure mental health program performance
  Purpose: Provide program administrators with standardized information on program performance in the area of mental health care

Rosenheck & Cicchetti (1998) [27]
  Objective: PI development and implementation
  PI/framework: Mental health program report card for public sector programs
  Purpose: Tool in improvement of service delivery, mental health system performance, and accountability

Macias et al. (1999) [28]
  Objective: Assess the worth of mental health certification as a core component of state and regional performance contracting
  PI/framework: The value of program certification for performance contracting
  Purpose: Assess the quality and fidelity of 'clubhouse' psychiatric rehabilitation programs

Baker (1999) [29]
  Objective: Description of management process for financial and clinical PI
  PI/framework: PI for physicians in community mental health centers
  Purpose: Report clinical and financial performance to payers of mental health services

Druss et al. (1999) [30]
  Objective: Examine the association between consumer satisfaction and administrative measures at an individual and a hospital level
  PI/framework: Patient satisfaction and administrative measures as indicators of the quality of mental health care
  Purpose: Provide providers, purchasers, and consumers with understandable and measurable information on the quality of health care

Department of Health and Human Services (2000) [31]
  Objective: Present a comprehensive, nationwide health promotion and disease prevention agenda
  PI/framework: Healthy People 2010: Understanding and Improving Health
  Purpose: Guiding instrument for addressing health issues, reversing unfavorable trends, and expanding past achievements in health

Huff (2000) [32]
  Objective: Assess the association between measures of post-admission outpatient utilization and readmission
  PI/framework: Outpatient utilization patterns and quality outcomes after first acute episode of mental health hospitalization
  Purpose: Provide state, patient advocates, and service providers with information to ensure outpatient quality of care

McCorry et al. (2000) [33]
  Objective: PI development and adoption of core set of PI by health plans, private employers, public payers, and accrediting associations
  PI/framework: The Washington Circle Group core set of PI for alcohol and other drug services for public- and private-sector health plans
  Purpose: Promote quality and accountability in the delivery and management of AOD abuse services by public and private organized systems of care

Vermont's Mental Health Performance Indicator Project Multi-stakeholder Advisory Group (2000) [34]
  Objective: Recommendations for PI to be included in a publicly available mental health report card
  PI/framework: Indicators of mental health program performance
  Purpose: Development of a data-based culture of learning about the system of care

National Association of State Mental Health Program Directors (2000) [35]
  Objective: Provide a guide and a framework for the implementation of PI in mental health systems
  PI/framework: The NASMHPD framework of mental health PI
  Purpose: Address the need for a standardized methodology for evaluating the impact of services provided through the public mental health system

Siegel et al. (2000) [36]
  Objective: Framework development and selection of performance measures
  PI/framework: PI of cultural competency in mental health organizations
  Purpose: Assess the cultural competency of mental health systems

American College of Mental Health Administration (2001) [37]
  Objective: PI development, reaching consensus between five national accreditation organizations on quality assessment and measurement
  PI/framework: A proposed consensus set of PI for behavioral health
  Purpose: Advance the partnership between consumers, purchasers, providers, and others in quality measurement and improvement

Young et al. (2001) [38]
  Objective: Estimate the rate of appropriate treatment, and the effect of insurance, provider type, and individual characteristics on receipt of appropriate care
  PI/framework: Survey to assess quality of care for depressive and anxiety disorders in the US
  Purpose: Evaluate mental health care quality on a national basis

California Department of Mental Health (2001) [39]
  Objective: PI development and identification of areas that require special study of feasibility of measures
  PI/framework: PI for California's public mental health system
  Purpose: Provide information needed to continuously improve the care provided in California's public mental health system

Eisen et al. (2001) [40]
  Objective: Provide data that could be used to develop recommendations for an improved consumer survey
  PI/framework: Toward a national consumer survey: evaluation of the CABHS and MHSIP instruments
  Purpose: Assess quality of behavioral health from the consumer perspective

Chinman et al. (2002) [41]
  Objective: Illustrate the utility of a continuous evaluation system in promoting improvements in a mental health treatment system
  PI/framework: The Connecticut Mental Health Center patient profile project: application of a service need index
  Purpose: Define the characteristics of the patient population to guide management decisions in caseload distribution and service development

Davis & Lowell (2002a, b) [42,43]
  Objective: Demonstrate the value of proper structure of mental health care systems and its relationship to suicide rate
  PI/framework: a. Expenditure on, and b. fiscal proportions of resources
  Purpose: Calculate the optimum distribution of community/state psychiatric hospital beds, and
cost per capita for mental health
care to minimize suicide rate
Dausey et al. (2002) [44]
Examine the relationship
between preadmission care and
length of inpatient stay, access
to aftercare, and rehospitalization
Assess the quality, continuity,
and intensity of care
Minnesota Department of
Human Services (2002) [45]
Inform counties and providers of PI measures for Adult Rule 79
the implementation of PI
mental health case
management
Report on outcomes from the
adult mental health system to
comply with state’s federal
mental health block grant
application
Hermann et al. (2002) [46]
Assess utility and applicability of
process measures for
schizophrenia care
Assess quality of care for
schizophrenia
Pandiani et al. (2002) [47]
Provide a methodological outline Measuring access to mental
Assess access to publicly funded
for measuring access and
health care: a multi-indicator
systems focusing on both
identify and discuss a set of
approach to program evaluation general and special populations
decision points in the project
Preadmission care as a new
mental health PI
National inventory of measures
of clinical processes proposed
or used in the U.S.
Lauriks et al. BMC Public Health 2012, 12:214
http://www.biomedcentral.com/1471-2458/12/214
Page 6 of 26
Table 1 Publications and PMHC quality measurement initiatives per nation/region (Continued)
Druss et al. (2002) [48]
Asses the relation between
mental health care quality
measures and measures of
general care quality
HEDIS 2000 mental health care
PI
Provide purchasers a report card
for rating and selecting health
plans
CDC–National Public Health
Performance Standards
Program, (NPHPSP; 2002) [49]
Present instruments for
assessment of local and state
public health systems
Local and State public health
system performance assessment
instruments & Local public
health governance performance
assessment instrument
To improve the practice of
public health by comprehensive
performance measurement tools
keyed to the 10 Essential
Services of Public Health
Beaulieu & Scutchfield (2002)
[50]
Assess the face and content
validity of NPHPSP instrument
Local Public Health System
Performance Assessment
Instrument
Ensure the delivery of public
health services and support a
process of quality improvement
Beaulieu et al. (2003) [51]
Assess the content and criterion
validity of NPHPSP instruments
Local and State Public Health
Measure performance of the
System Performance Assessment local and state public health
instruments
system
Trutko & Barnow (2003) [52]
Explore feasibility of developing
a core set of PI measures for
DHHS programs that focus on
homelessness
Core PI for homeless-serving
programs administered by the
US DHHS
Facilitate documentation and
analysis of the effectiveness of
program interventions
The Urban Institute (2003) [53] Describe lessons learned from PI
development experiment and
provide suggestions for other
communities
Community-wide outcome
indicators for specific services
Balance outcome-reporting
requirements of funders for
accountability and providers for
improvement of services
Greenberg & Rosenheck
(2003) [54]
Examine the association of
continuity of care with factors
(not) under managerial control
Managerial and environmental
factors in the continuity of
mental health care across
institutions
Assess the quality of outpatient
care for persons with severe
mental illness
Owen et al. (2003) [55]
Examine meaningfulness and
validity of PI and automated
data elements
Mental health QUERI initiative:
expert ratings of criteria to
assess performance for major
depressive disorder and
schizophrenia
Provide clinicians, managers,
quality improvement specialists
and researchers in the Veterans
Health Administration with
useful data on clinical practice
guidelines compliance
Siegel et al. (2003) [56]
Benchmarking selected
performance measures
PI of cultural competency in
mental health organizations
Assess organizational progress in
attaining cultural competency
(CC) and to provide specific
steps for implementing facets of
CC.
Solberg et al. (2003) [57]
Understand the process,
outcomes and patient
satisfaction of primary care
patients diagnosed with
depression
Process, outcomes and
satisfaction in primary care for
patients with depression
Identify quality gaps and serve
as a baseline for quality
improvements in health plan
depression care
Center for Mental Health
Services (CMHS), Substance
Abuse and Mental Health
Service Administration
(SAMHSA), DHHS (2003) [58]
Report on 16-state indicator pilot
project focused on assessment,
refinement an pilot testing
comparable mental health
performance indicators
PI adopted from the NASMHPD
Framework of Performance
Indicators reflecting much of
the MHSIP Report Card
Edlund et al. (2003) [59]
Validate the technical qualitysatisfaction relationship and
examine the effects of selection
bias among patients with
depressive and anxiety disorders
Satisfaction measures as a
reflection of technical quality of
mental health care
Report mental health system
performance comparably across
states for national reporting, and
facilitate planning, policy
formulation and decision making
at the state level.
Provide health care plan and
provider quality information to
insurers, providers, and
researchers for improvement of
quality of care for common
mental disorders
Virginia Department of Mental PI implementation and report on Virginia’s performance outcomes
Health, Mental Retardation
outcomes
measurement system (POMS)
and Substance Abuse Services
(2003) [60]
Provide public mental health
authorities with information on
consumer outcomes and
provider performance to contain
costs, improve quality and
provide greater accountability
Blank et al. (2004) [61]
Continuously improve the quality
of services and increase
accountability for taxpayer
dollars
Assess efficiency of a selection of Virginia’s POMS
POMS indicators and develop
recommendations for improving
POMS
Lauriks et al. BMC Public Health 2012, 12:214
http://www.biomedcentral.com/1471-2458/12/214
Page 7 of 26
Table 1 Publications and PMHC quality measurement initiatives per nation/region (Continued)
Charbonneau et al. (2004) [62] Explore the relationship of
process measures with
subsequent overall
hospitalizations
Guideline-based depression
process measures
Estimate healthcare quality and
quantify its benefits
Stein et al. (2004) [63]
Evaluate the process and quality
of care and examine patient
characteristics that potentially
determine quality
Quality of care for patients with
a broad array of anxiety
disorders
Assess the quality of care
received in primary care settings
for efforts at quality
improvement
Druss et al. (2004) [64]
Assess relation between mental
health care volume and quality
HEDIS 2000 mental health care
PI
Reflect the capacity to treat
specialized conditions and as
proxy for clinician volume
McGuire & Rosenheck (2004)
[65]
Examine the relation between
incarceration history and
baseline psychosocial problems
service utilization, and outcomes
of care
Criminal history as a prognostic
indicator in the treatment of
homeless people with severe
mental illness
Provide clinicians and
administrators with information
on treatment prospects of
former inmates
Leff et al. (2004) [66]
Investigate the relationship
between service fit and mortality
as a step towards understanding
the general relationship between
service quality and outcomes
Service quality as measured by
service fit vs. mortality among
public mental health system
service recipients
Assess and compare programs
and systems, the extent to which
an intervention has been
implemented in program
evaluations, an service need in
program and resource allocation
planning
Valenstein et al. (2004) [67]
Examine providers’ views of
PI drawn from sets maintained
quality monitoring processes and and implemented by various
patient, provider and
national organizations
organizational factors that might
be associated with more positive
views
Mental health recovery: What PI development, and assessment
helps and what hinders? A
of usability and implementation
National Research Project for
the Development of Recovery
Facilitating System
Performance Indicators (2004)
[68]
Recovery oriented system
indicators (ROSI)
Provide mental health care
providers with feedback about
their performance
Facilitate mental health recovery,
and bridge the gap between the
principles of recovery and selfhelp and application of these
principles in everyday work of
staff and service systems
Hermann et al. (2004) [69]
PI selection and assessment of PI Core set of PI for mental and
meaningfulness and feasibility
substance-related care
Ensure that systems and
providers focus on clinically
important processes with known
variations in quality of care
Rost et al. (2005) [70]
Explore relation between
administrative PI and
absenteeism
Relationship of depression
treatment PI to employee
absenteeism
Provide employers with evidence
of the value of the healthcare
they purchase.
Mental Health Statistics
Improvement Program (2005)
[71]
PI development and present
toolkit for methodology,
implementation and uses
MHSIP Quality Report (MQR)
Reflect key concerns in mental
health systems or organizations
performance
Washington State Department PI implementation and report on State-wide publicly funded
of Social and Health Services– PI information
mental health PI
Mental Health Division (2005)
[72]
Help system managers and
payers understand trends in
services delivery systems and
change across time
Provide a conceptual framework
for performance measurement
and improvement
New York Office of Mental
Health (2005) [73]
PI development and
implementation
Garnick et al. (2006) [74]
Examine different types of PI,
how they fit within the
continuum of care, and the
types of data that can be used
to arrive at these measures
Hermann et al. (2006) [75]
Develop statistical benchmarks
for quality measures of mental
health and substance-related
care
2005-2009 Statewide
comprehensive plan for mental
health services
PI for alcohol and other drug
services
Evaluate how well practitioners’
actions conform to guidelines,
review criteria or standards to
improve access, and quality of
treatment
Selected measures from core set Assess quality of care for
of PI for mental and substance- Medicaid beneficiaries to inform
related care
quality improvement
Lauriks et al. BMC Public Health 2012, 12:214
http://www.biomedcentral.com/1471-2458/12/214
Page 8 of 26
Table 1 Publications and PMHC quality measurement initiatives per nation/region (Continued)
Mental health recovery: What
helps and what hinders? A
National Research Project for
the Development of Recovery
Facilitating System
Performance Indicators (2006)
[76]
Refinement of self-report survey
and administrative profile PI
based on feedback from
stakeholders
Recovery oriented system
indicators (ROSI)
Measure critical elements and
processes of recovery facilitating
mental health programs and
delivery systems
Busch et al. (2007) [77,78]
PI development informed by
Quality of care for bipolar I
APA guidelines for the treatment disorder
of bipolar disorder
Assess quality of medication and
psychotherapy treatment
Center for Quality Assessment PI development using an
and Improvement in Mental
adaptation of the RAND
Health (2007) [79]
appropriateness method, and
assess reliability
Standards for bipolar excellence
(STABLE) PI
Advance the quality of care for
by supporting improved
recognition and promoting
evidence-based management
Present the revised instruments
Version 2.0 of the Local and
for assessment of local and state State public health system
public health systems
performance assessment
instruments and Local public
health governance performance
assessment instrument
Provide users with information
to identify strengths and
weaknesses of the public health
system to determine
opportunities for improvement
Virginia Department of Mental PI implementation and report on 2008 mental health block grant
Health, Mental Retardation
achieved goals
implementation report PI
and Substance Abuse services
(2008) [81]
Monitor the implementation and
transformation of a recoveryoriented system
Canadian Institute for Health
Information (CIHI; 2001) [82]
PI development, assessment of
feasibility & usefulness
Maintain and improve Canada’s
health system
Federal/Provincial/Territorial
Advisory Network on Mental
Health (2001) [83]
PI development
CDC–National Public Health
Performance Standards
Program (NPHPSP; 2007) [80]
Canada
The Roadmap Initiative–Mental
health and Addiction Services
Roadmap Project. Phase 1
Indicators
PI for Mental health Services
and Supports–A Resource Kit
Facilitate ongoing accountability
and evaluation of mental health
services and supports
Ontario Ministry of Health and PI development and mechanisms Mental Health Accountability
Long-term Care (2003) [84]
for implementation
Framework
Increasing health system
accountability to ensure services
are as effective and efficient as
possible
Addington et al. (2005) [85]
PI selection based on literature
PI for early psychosis treatment
review and consensus procedure services
Evaluate quality, and assist
providers in improving quality of
health care
NMHWG Information Strategy
committee Performance
Indicator drafting group
(2005) [86]
Development conceptual
framework of performance & PI
Key PI for Australian public
mental health services
Improve public sector mental
health service quality
Meehan et al. (2007) [87]
Assessment of feasibility &
usefulness of benchmarking
mental health services
Benchmarking public sector
Input, process, output and
outcome PI for inpatient mental mental health service
organizations
health services
Jenkins (1990) [88]
PI development
A system of outcome PI for
mental health care.
Ensure that clinicians district
health authorities and directors
of public health can monitor and
evaluate mental health care
National Health Service
(1999a, b) [89,90]
Framework and PI development
A National Service Framework
for Mental Health; A New
Approach To Social Services
Performance
Help drive up quality and
remove the wide and
unacceptable variations in
provision.
Shipley et al. (2000) [91]
PI development and validity
assessment
Patient satisfaction: a valid index Provide PMHC planners with an
of quality of care in a
independent yardstick for mental
psychiatric service
health services and determine
population mental health
Australia
United
Kingdom
Lauriks et al. BMC Public Health 2012, 12:214
http://www.biomedcentral.com/1471-2458/12/214
Page 9 of 26
Table 1 Publications and PMHC quality measurement initiatives per nation/region (Continued)
UK (cont.)
Audit Commission (2001) [92]
PI development and application
Library of Local Authority PI
Accountability and
benchmarking of local
authorities by national
government
Jones (2001) [93]
Review of pre-existing PI
Hospital care pathways for
patients with schizophrenia
Clarify terms and concepts in
schizophrenia care process
Shield et al. (2003) [94]
PI development
PI for primary care mental
health services
Facilitating quality improvement
and show variations in care
Commission for Health
Improvement (2003) [95]
PI development and
implementation
Mental health trust balanced
scorecard indicators
Improve care provided by
mental health trusts and
promote transparency in PMHC
Department of Health (2004)
[96]
PI development
National Standards, Local
Action–health and social care
standards and planning
framework
Set out the framework for all
NHS organizations and social
service authorities to use in
planning over the next financial
three years
NHS Health Scotland (2007)
[11]
PI development based on
current data, policy, evidence,
and expert-opinion
Core set of national, sustainable
mental health indicators for
adults in Scotland
Determine whether mental
health is improving and track
progress
Care Services Improvement
Partnership (2007) [97]
PI development
Outcome indicators framework
for mental health day services
Help commissioners and
providers to monitor, evaluate,
and measure the effectiveness of
day services adults with mental
health problems
Healthcare Commission (2007) PI development
[98]
The Better Metrics Project
Department of Communities
and Local Government (2007)
[99]
PI development and application
The National Indicator Set (NIS)
in Comprehensive Area
Assessment (CAA)
Provide a common set of
requirements to ensure safe and
acceptable quality health
provision, and provide a
framework for continuous
improvement
Performance management of
local government by central
government
Association of Public Health
Observatories (2007) [100]
Present data on the factors
which give rise to poor mental
health, mental health status of
populations, provision of
interventions, service user
experience and traditional
outcomes
Indications of public health in
the English Regions: Mental
Health
Provide a resource for regional
public health directors, PCT and
CSIP directors in making
decisions, holding to account
those responsible for the delivery
and improving mental health of
the population.
Wilkinson et al. (2008) [101]
Report on the construction of a
set of indicators for mental
health and the publication of a
report for England’s Chief
Medical Officer
Indications of public health in
the English Regions: Mental
Health
Initiating public health action to
improve health at a regional
level in England
London Health Observatory
(2008) [102]
PI development and
implementation
Mental health and wellbeing
scorecard
Support primary care trusts in
monitoring delivery of national
health improvement objectives,
and improvement of mental
health and wellbeing
Care Services Improvement
Partnership (2009) [103]
Broaden initial framework to
Outcome indicators framework
provide for application in mental for mental health services
health services more widely
Indications of public health in
PI development, application of
pre-existing PI, operationalization the English regions: Drug Use
of issues, targets and
recommendations in policies
Ensure the effectiveness and
impact of redesigned and
refocused services
Present information on the
relative positions of regions on
major health policy areas,
highlighting differences, to
stimulate practitioners to take
action to improve health
PI development, assessment of
feasibility
Reflect the impact that disability
due to mental disorders has on
population health
Association of Public Health
Observatories (2009) [104]
Spain
Gispert et al. (1998) [105]
Mental health expectancy: a
global indicator of population
mental health
Lauriks et al. BMC Public Health 2012, 12:214
http://www.biomedcentral.com/1471-2458/12/214
Page 10 of 26
Table 1 Publications and PMHC quality measurement initiatives per nation/region (Continued)
Germany
Kunze & Priebe (1998) [106]
Development of quality
assessment tool
Assessing the quality of
psychiatric hospital care: a
German approach.
Assessment of quality of care
after political reforms to help
promote quality.
Bramesfeld et al. (2007) [107]
Implementation of quality
assessment tool
Evaluating inpatient and
Evaluate performance of mental
outpatient care in Germany with health care services to improve
the WHO responsiveness
responsiveness
concept
Roeg et al. (2005) [108]
Development of disease-specific
concept of quality
Conceptual framework of
quality for assertive outreach
programs for severely impaired
substance abuses
Improve understanding of the
relationship between specific
program features and
effectiveness
Nabitz et al. (2005) [109]
Development of disease-specific
concept of quality
A quality framework for
addiction treatment programs
Clarify the concept of quality for
addiction treatment programs
Nieuwenhuijsen et al. (2005)
[110]
PI development & validity
assessment
PI for rehabilitation of workers
with mental health problem
Assessment of occupational
health care to improve the
quality of care
Wierdsma et al. (2006) [111]
Application & risk adjustment of
PI
Utilization indicators for quality Assess criteria for involuntary
of involuntary admission mental admission to inpatient mental
health care
health care
Steering Committee–
Transparency Mental
Healthcare (2007) [112]
Improvement of existing PI and
PI development
Basic Set of PI for Mental Health Promoting transparency and
Care and Addiction Care
publication of quality
services
information by mental health
and addiction service providers
Bollini et al. (2008) [113]
PI development,
operationalization of (PORT)
guidelines
Indicators of conformance with
guidelines of schizophrenia
treatment in mental health
services
Monitor the conformance of care
with recommend practices and
identify areas in need of
improvement
Lund & Fisher (2003) [114]
PI development and assessment
of PI usefulness
Community/hospital indicators
in South African public sector
mental health services
Assess the implementation of
policy objectives over time
Chong et al. (2006) [115]
Application of pre-existing PI and Assessment of the quality of
operationalization of guidelines
care for patients with firstepisode psychosis
Assess adherence to guidelines
in an early psychosis intervention
program
National Research and
Development Centre for
Welfare and Health (STAKES)–
EC Health Monitoring
Programme (2002) [8]
PI development and assessment
of feasibility and usability
A set of mental health
indicators for European Union
Contribute to the establishment
of a community monitoring
system
Organisation for Economic
Cooperation and
Development (OECD; 2004)
[10]
World Health Organization
(2005) [116]
PI selection and assessment of
utility
Indicators for the quality of
Improve organization and
mental health care at the health management of care to allow
system level in OECD countries countries to spend their health
care dollars more wisely
The
Netherlands
The
Netherlands
(cont.)
Italy
South Africa
Singapore
International
Saxena et al. (2006) [117]
PI development,
Assessment Instrument for
operationalization of
Mental Health Systems (WHOrecommendations, assessment of AIMS) version 2.2
usefulness
Collect essential information on
the mental health system of a
country or region to improve
mental health systems
Describe and compare 4 existing Healthy People 2010; Mental
high-income country public
Health Report Card (MHSIP);
mental health indicator schemes Commission for Health
Improvement Indicators (CHI);
European community Health
Indicators (ECHI)
Contribute to the development
of relevant policies and plans
Lauriks et al. BMC Public Health 2012, 12:214
http://www.biomedcentral.com/1471-2458/12/214
Page 11 of 26
Table 1 Publications and PMHC quality measurement initiatives per nation/region (Continued)
Hermann et al. (2006) [118]
Report on methods employed to Indicators for the quality of
Facilitate improvement within
reach consensus on the OECD
mental health care at the health organizations, provide oversight
mental health care indicators
system level in OECD countries of quality by public agencies and
private payers, and provide
insight into what levels of
performance are feasible
OECD (2008) [119]
Provide overview of present
mental health care information
systems to assess feasibility of
performance indicators
was developed by the Mental Health Statistics Improvement Program (MHSIP). The program aims to assess general performance, support management functions, and maximize responsiveness to the service needs of mental health services; it published the Consumer-Oriented Report Card, which includes 24 indicators of Access, Appropriateness, Outcomes, and Prevention [21]. Eisen et al. evaluated the consumer surveys from both the HEDIS (the Consumer Assessment of Behavioral Health Survey; CABHS) and the MHSIP Consumer Survey. The results of this study were reviewed by several national stakeholder organizations to make recommendations for developing a survey combining the best features of each. This resulted in the development of the Experience of Care and Health Outcomes (ECHO) survey [40]. Building on the experiences with the Consumer-Oriented Report Card and the advances in quality measurement and health information technology, the MHSIP proposed a set of 44 PI in its Quality Report [71].
The nationwide health promotion and disease prevention agenda for the first decade of the 21st century aimed at increasing quality and years of healthy life and eliminating health disparities [31]. This agenda contained objectives and measures to improve health, organized into 28 focus areas, including Mental Health and Mental Disorders and Substance Abuse.
The national association representing state mental health commissioners/directors and their agencies (NASMHPD) provided a framework for the implementation of standardized performance measures in mental health systems [35]. A workgroup reviewed national indicators and instruments, surveyed state mental health authorities, and conducted a feasibility study in five states. Using the MHSIP domains as a starting point, the workgroup arrived at a framework that includes 32 PI for state mental health systems.
The American College of Mental Health Administration (ACMHA) recognized the need for a national dialog, a shared vision in the field of mental health and substance abuse services, and an agreement on a core set of indicators, and formed a workgroup that collaborated with national accrediting organizations to propose 35 indicator definitions. These definitions were organized in three domains (i.e. access, process, and outcome) applicable to quality measurement for either comparison between mental health services or internal quality improvement activities [37].
In response to the interest expressed by a number of
states to develop a measure related to recovery that
could be used to assess the performance of state and
local mental health systems and providers, a national
research project for the development of recovery facilitating system performance indicators was carried out. The
Phase One Report on the factors that facilitate or hinder
recovery from psychiatric disabilities set a conceptual
framework [120]. This provided the basis for a core set
of system-level indicators that measure structures and
processes of a recovery-facilitating environment, and
generate comparable data across state and local mental
health systems [68]. The second phase of the project included the development of the Recovery Oriented System Indicators (ROSI) measures based on the findings of phase one, a prototype test and review of self-report indicators in seven states, and a survey of nine states to receive feedback on administrative indicators. The ROSI consists of a 42-item consumer self-report survey and a 23-item administrative data profile that gather data on experiences and practices that enhance or hinder recovery [76].
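Because the ROSI pairs consumer-reported items with an administrative profile, producing comparable system-level figures is largely a matter of consistent aggregation. The sketch below is purely illustrative (the item names, the 1-4 agreement scale, and the averaging rule are invented for this example; the actual ROSI instruments define their own items and scoring), but it shows how survey responses might be rolled up into one system-level score:

```python
from statistics import mean

def domain_score(responses: list[dict[str, int]], items: list[str]) -> float:
    """Average the selected survey items across respondents to yield
    a single system-level score (hypothetical scoring rule)."""
    per_person = [mean(r[item] for item in items) for r in responses]
    return mean(per_person)

# Invented example data: two consumers answering three recovery items
surveys = [
    {"staff_respect": 4, "choice_in_services": 3, "peer_support": 2},
    {"staff_respect": 3, "choice_in_services": 4, "peer_support": 3},
]
items = ["staff_respect", "choice_in_services", "peer_support"]
print(f"System-level recovery-orientation score: {domain_score(surveys, items):.2f}")
# -> 3.17 on the assumed 1-4 scale
```

Whatever the actual scoring rule, the point of such aggregation is that every state or locality computes the same statistic from the same items, which is what makes the resulting data comparable across systems.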
Parallel to the efforts to establish standardized measures of mental health and substance abuse care performance, the National Public Health Performance
Standards Program (NPHPSP) developed three assessment instruments to assist state and local partners in
assessing and improving their public health system, and
to guide state and local jurisdictions in evaluating their
current performance against a set of optimal standards.
Each of the three NPHPSP instruments is based on a
framework of ten Essential Public Health Services which
represent the spectrum of public health activities that
should be provided in any jurisdiction. The NPHPSP is
not specifically focused on public mental health care, but it is one of the first national programs that aim to measure the performance of the overall public health system, which includes the public, private, and voluntary entities that contribute to public health activities within a given area [49]. Beaulieu and Scutchfield [50]
assessed the face and content validity of the instrument
for local public health systems and found that the standards were highly valid measures of local public health
system performance. Beaulieu et al. evaluated the content and criterion validity of the local instrument, and
the content validity of the state performance assessment
instrument. The local and state performance instruments were found to be content-valid measures of local and state system performance, respectively. The criterion validity of a summary performance score on the local instrument could be established, but was not upheld for performance judgments on individual Essential Services [51]. After their publication in 2002, the NPHPSP's public health performance assessment instruments were applied in 30 states. The NPHPSP partnered with seven national organizations, consulted with experts in the field of public health, and conducted field tests to inform revisions of these instruments [80].
One of the first national initiatives to develop performance measures that include socioeconomic and psychosocial care focused on the development of core
performance indicators for homeless-serving programs
administered by the DHHS [52]. Based on interviews
with program officials and review of existing documentation and information systems, 17 indicators that could
be used by these programs were suggested, despite large
differences between programs.
A pilot test of PI of access, appropriateness, outcome,
and program management on a statewide basis, part of
NASMHPD’s Sixteen state study on mental health performance measures, demonstrated the potential for
developing standardized measures across states and confirmed that realizing this potential will depend on enhancements to the data and performance measurement infrastructure. The pilot also demonstrated that states can use their current performance measurement systems to report comparable information [58].
An online database providing more than 300 process
measures for assessment and improvement of mental
health and substance abuse care was set up by the Center for Quality Assessment and Improvement in Mental
Health (CQAIMH). Each measure is accompanied by a
clinical rationale, numerator and denominator specifications, information on data sources, domain of quality,
evidence basis, and developer contact information [121].
This national inventory of mental health quality measures includes many of the measures developed by the
national initiatives discussed above as well as many process measures developed by individual states. It is one of
the most comprehensive and broadly supported
performance assessment and improvement tools in the field of (public) mental health care to date.
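The structure the CQAIMH inventory describes for each measure is essentially a rate with an explicit eligible population (denominator) and a criterion-meeting subset (numerator). As a rough illustration only (the record layout, field names, and the example measure below are hypothetical, not the CQAIMH's actual schema), a process measure of this kind might be represented and computed as follows:

```python
from dataclasses import dataclass

@dataclass
class ProcessMeasure:
    """Hypothetical record mirroring the fields the inventory describes:
    clinical rationale, numerator/denominator specs, and data source."""
    name: str
    rationale: str
    denominator_spec: str  # who is eligible for the measure
    numerator_spec: str    # which eligible cases met the criterion
    data_source: str       # e.g. claims, medical record, survey

    def rate(self, numerator_count: int, denominator_count: int) -> float:
        """Compute the measure as a simple proportion."""
        if denominator_count == 0:
            raise ValueError("empty denominator: no eligible cases")
        return numerator_count / denominator_count

# Invented example in the spirit of common follow-up measures
followup = ProcessMeasure(
    name="7-day follow-up after psychiatric hospitalization",
    rationale="Timely outpatient contact is thought to reduce readmission risk",
    denominator_spec="discharges with a mental illness diagnosis",
    numerator_spec="discharges with an outpatient visit within 7 days",
    data_source="administrative claims",
)
print(f"{followup.name}: {followup.rate(640, 1000):.0%}")  # -> 64%
```

Making the numerator and denominator specifications explicit in this way is what allows different providers and states to compute the same measure from their own data sources and still obtain comparable rates.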
In addition to the quality measurements requested by
national organizations and federal agencies, some states
have developed quality assessment instruments or measures tailored specifically to their data sources and mental health care system. For example, the state of
Vermont’s federally funded Mental health Performance
Indicator Project asked members of local stakeholders in
the field of mental health (i.e. providers, purchasers, and
government agencies) to recommend specific PI for
inclusion in a publicly available mental health report
card of program performance. This multi-stakeholder
advisory group proposed indicators structured in three
domains, i.e. ‘treatment outcomes’, ‘access to care’, and
‘practice patterns’ [34].
Another example of state-specific public mental health
performance measurement was found in the state of
California. A Quality Improvement Committee established indicators of access and quality to provide the
information needed to continuously improve the care
provided in California’s public mental health system.
The committee adopted the performance measurement
terminology used by the ACMHA and judged possible
indicators against a number of criteria (such as availability of data in the California mental health system). A
total of 15 indicators were formulated in four domains:
structure, access, process, and outcomes. So-called special studies were designed to assess gaps in data availability and determine benchmarks of performance [39].
Other states and localities took similar initiatives
that often served a dual purpose. On the one hand,
the indicators provide accountability information for
federally funded programs (e.g. Minnesota, Virginia)
[45,81], and on the other, the indicators provide local
providers and service delivery systems with information
to improve state mental health care quality (e.g. Virginia; Maryland) [53,60]. Successful implementation of
such state-initiated quality assessment systems is not
guaranteed. Blank et al. [61] reported on the pilot implementation of the Performance and Outcomes Measurement System (POMS) by the state of Virginia. The pilot
was perceived to be costly, time-consuming and burdensome by the majority of the representatives of participating community health centers and state hospitals.
Despite large investments and efforts in redesigning POMS to be more efficient and responsive, the POMS project was cancelled due to state budget cuts in 2002.
Two years later, Virginia participated in a pilot to
demonstrate the use of the ROSI survey to measure a
set of mental health system PI [122].
Canada
Canada’s health care system is publicly funded and
administered on a provincial or territorial basis, within
guidelines set by the federal government. The provincial
and territorial governments have primary jurisdiction in
planning and delivery of mental health services. The federal government collaborates with the provinces and territories to develop responsive, coordinated and efficient
mental health service systems [2]. This collaboration is
reflected in four publications on PMHC performance
measurement discussed below.
The Canadian Institute for Health Information (CIHI)
launched the Roadmap Initiative to build a comprehensive, national health information system and infrastructure. The Prototype Indicator Report for Mental Health and Addiction Services was published as part of the Roadmap Initiative. The report contained indicators relevant to acute- and community-based services whose costs were entirely or partially covered by a national, territorial or provincial health plan [83].
Adopting the indicator domains from the CIHI framework, the Canadian Federal/Provincial/Territorial Advisory Network on Mental Health (ANMH) provided a resource kit of PI to facilitate accountability and evaluation of mental health services and supports. Based on a literature review and expert and stakeholder surveys, the ANMH presented 56 indicators across eight domains of performance, i.e. acceptability, accessibility, appropriateness, competence, continuity, effectiveness, efficiency, and safety [83].
Utilizing the indicators and domains from the ANMH and CIHI, the Ontario Ministry of Health and Long-term Care (MOHLTC) designed a mental health accountability framework that addressed the need for a multi-dimensional, system-wide framework for the public health care system, an operating manual for mental health and addiction programs, and various hospital-focused accountability tools [84].
Focusing on early psychosis treatment services,
Addington et al. [85] reviewed the literature and used a
structured consensus-building technique to identify a set
of service-level performance measures. They found 73
relevant performance measures in the literature and reduced
the set to 24 measures that were rated as essential by
stakeholders. These disorder-specific measures cover the
domains of performance originally proposed by the
CIHI and utilized by the ANMH and the MOHLTC.
Australia
Medicare is Australia’s universal health care system
introduced in 1984. It is financed through progressive
income tax and an income-related Medicare levy. Medicare provides access to free treatment in a public hospital, and free or subsidized treatment by medical
practitioners including general practitioners and specialists. Mental health care services are primarily funded by
government sources [123]. One report and one scientific publication on PI for the Australian PMHC system and services were found.
The Australian National Mental Health Working
Group (NMHWG) proposed indicators to facilitate collaborative benchmarking between public sector mental
health service organizations based on the Canadian
CIHI model. Thirteen so-called Phase 1 indicators were
found suitable for immediate introduction based on the
available data collected by all states and territories [86].
Following major reform and ongoing deinstitutionalization of the mental health care system, Meehan et al.
[87] reported on attempts to benchmark inpatient psychiatric services. They applied 25 indicators to assess
performance of high secure services, rehabilitation services, and medium secure services in three rounds of
benchmarking. The primary conclusion of the study was
that it is possible and useful to collect and evaluate performance data for mental health services. However,
information related to case mix as well as service characteristics should be included to explain the differences
in service performance.
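At its core, benchmarking of this kind compares each service's indicator values against a statistic of its peer group. The sketch below is purely illustrative (the indicator, service names, figures, and the median-based benchmark are invented; Meehan et al. do not publish their procedure in this form), but it captures the shape of one benchmarking round and its key limitation:

```python
from statistics import median

# Invented data: seclusion-use rate (%) per inpatient service in one round
rates = {"service_a": 12.0, "service_b": 7.5, "service_c": 9.0}

benchmark = median(rates.values())  # peer-group benchmark for this indicator
for name, rate in sorted(rates.items()):
    flag = "above benchmark" if rate > benchmark else "at/below benchmark"
    print(f"{name}: {rate:.1f}% ({flag})")

# Note: raw comparisons like this ignore case mix and service
# characteristics, which is exactly the limitation the study raises.
```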
United Kingdom
Public mental health care in the UK is governed by the
Department of Health (DH) and provided by the
National Health Service (NHS) and social services.
These services are paid for from taxation. The NHS is
structured differently in various countries of the UK. In
England, 28 strategic health authorities are responsible
for healthcare in their region. Health services are provided by 'trusts' that are directly accountable to the strategic health authorities. Eighteen publications concerning the quality of public mental healthcare in the UK were found. All but one focus on PMHC in England, and only five studies were published in scientific
peer-reviewed journals. In this section we highlight the
large national initiatives.
A National Service Framework (NSF) for Mental
Health set seven standards in five areas of PMHC (i.e.
mental health promotion, primary care and access to
services, effective services, caring about carers, and preventing suicide) [90]. Progress on the implementation of the NSF for Mental Health was measured with several indicators per standard to assess the realization of the care structures, processes, and outcomes set out by the NSF [124].
In response to the government's new agenda for social
services, the DH issued a consultation document on a
new approach to social services performance [89]. This
approa…
