Application: Role and Value of Evaluation
The United Nations, the American Red Cross, and other international disaster response organizations worked together to care for trauma survivors after four devastating hurricanes smashed into Haiti in rapid succession, each striking before communities had time to regroup from the last. Then, in 2010, a massive earthquake struck the island. Haiti's poverty complicated responder effectiveness and may have compounded survivor trauma because residents had few resources. While back-to-back disasters can make it difficult to evaluate crisis management plans, they underscore the importance of learning what worked and what went wrong.
Once a disaster has occurred, the evaluation process includes a review of what was and was not effective in the response. Learning from a disaster can require making strategic changes in an organization or community. Last week you encountered new procedures for disaster transportation recovery, such as design-build, that can change the manner in which communities recover from disasters. Organizations can take a lesson from this shift in thinking and look to innovative practices for strategic planning and recovery. Evaluation is therefore a key element in crisis management planning and recovery.
To prepare for this assignment:
• Review Chapter 17 in your course text, Crisis Intervention Strategies, focusing on systems overviews and the Principles of a Crisis Intervention Ecosystem. Consider the value of ongoing plan evaluation.
• Review the Appendix and Chapters 5 and 9 in your course text, Crisis Management in the New Strategy Landscape, focusing on organizational learning and evaluation of crisis management plans.
• Review the article "Program Evaluation: The Accountability Bridge Model for Counselors." Consider how counselors can use program evaluation to enhance accountability to stakeholders.
• Review recent crises and/or disasters online and think about what can be learned about crisis management from them.
The assignment (2-page paper, APA format):
Provide an analysis of the role and value of evaluation as part of a crisis management plan. Provide specific examples to illustrate your arguments.
Chapter 17, Crisis Intervention Strategies:
https://bookshelf.vitalsource.com/books/9781305888081/pageid/600
Chapter 5, Crisis Management in the New Strategy Landscape:
https://bookshelf.vitalsource.com/books/9781483315461/epubcfi/6/24[;vnd.vst.idref=ch05]!/4/2@0:0
Chapter 9, Crisis Management in the New Strategy Landscape:
https://bookshelf.vitalsource.com/books/9781483315461/epubcfi/6/32[;vnd.vst.idref=ch09]!/4/2@0:0
Program Evaluation: The Accountability Bridge Model for Counselors
Randall L. Astramovich and J. Kelly Coker

The accountability and reform movements in education and the human services professions have pressured counselors to demonstrate outcomes of counseling programs and services. Evaluation models developed for large-scale evaluations are generally impractical for counselors to implement. Counselors require practical models to guide them in planning and conducting counseling program evaluations. The authors present the Accountability Bridge Counseling Program Evaluation Model and discuss its use in evaluating counseling services and programs.

Journal of Counseling & Development, Spring 2007, Volume 85, pp. 162–172. © 2007 by the American Counseling Association. All rights reserved.
Program evaluation in counseling has been a consistent topic
of discourse in the profession over the past 20 years (Gysbers,
Hughey, Starr, & Lapan, 1992; Hadley & Mitchell, 1995; Loesch,
2001; Wheeler & Loesch, 1981). Considered an applied research
discipline, program evaluation refers to a systematic process of
collecting and analyzing information about the efficiency, the ef-
fectiveness, and the impact of programs and services (Boulmetis &
Dutwin, 2000). The field of program evaluation has grown rapidly
since the 1950s as public and private sector organizations have
sought quality, efficiency, and equity in the delivery of services
(Stufflebeam, 2000b). Today, professional program evaluators are
recognized as highly skilled specialists with advanced training in
statistics, research methodology, and evaluation procedures (Hosie,
1994). Although program evaluation has developed as a distinct
academic and professional discipline, human services professionals
have frequently adopted program evaluation principles in order to
conduct micro-evaluations of local services. From this perspective,
program evaluation can be considered as a type of action research
geared toward monitoring and improving a particular program or
service. Because micro-evaluations are conducted on a smaller
scale, they may be planned and implemented by practitioners.
Therefore, for the purposes of this article, we consider counseling
program evaluation to be the ongoing use of evaluation principles
by counselors to assess and improve the effectiveness and impact
of their programs and services.
Challenges to Counseling Program Evaluation
Counseling program evaluation has not always been conceptual-
ized from the perspective of practicing counselors. For instance,
Benkofski and Heppner (1999) presented guidelines for counsel-
ing program evaluation that emphasized the use of independent
evaluators rather than counseling practitioners. Furthermore,
program evaluation literature has often emphasized evaluation
models and principles that were developed for use in large-scale
organizational evaluations by professional program evaluators
(e.g., Kellaghan & Madaus, 2000; Kettner, Moroney, & Martin,
1999). Such models and practices are not easily implemented by
counseling practitioners and may have contributed to the hesi-
tance of counselors to use program evaluation methods. Loesch
(2001) argued that the lack of counselor-specific evaluation
models has substantially contributed to the dichotomy between
research and practice in counseling. Therefore, new paradigms
of counseling program evaluation are needed to increase the
frequency of practitioner-implemented evaluations.
Much of the literature related to counseling program
evaluation has cited counselors' lack of both the ability and
the interest to systematically evaluate counseling services
(e.g., Fairchild, 1993; Whiston, 1996).
Many reasons have been suggested for counselors’ failure to
conduct evaluations. An important reason is that conducting
an evaluation requires some degree of expertise in research
methods, particularly in formulating research questions, col-
lecting relevant data, and selecting appropriate analyses. Yet
counselors typically receive little training to prepare them for
demonstrating outcomes (Whiston, 1996) and evaluating their
services (Hosie, 1994). Consequently, counselor education
programs have been criticized for failing to provide appropri-
ate evaluation and research training to new counselors (Bor-
ders, 2002; Heppner, Kivlighan, & Wampold, 1999; Sexton,
1999; Sexton, Whiston, Bleuer, & Walz, 1997). Counselors
may, therefore, refrain from program evaluation because of
a lack of confidence in their ability to effectively collect and
analyze data and apply findings to their professional practice
(Isaacs, 2003). However, for those counselors with the req-
uisite skills to conduct evaluations, their hesitance may be
related to the fear of finding that their services are ineffective
(Lusky & Hayes, 2001; Wheeler & Loesch, 1981).
Despite calls for counselors and counseling programs to em-
brace research and evaluation as an integral part of the provision of
counseling services (e.g., Borders & Drury, 1992; Fairchild, 1994;
Whiston, 1996), there is virtually no information that documents
counselors’ interest in and use of counseling program evaluation.
Although counselors may place minimal value on research and
evaluation activities (Loesch, 2001), strong sociopolitical forces,
including the emphasis on managed care in mental health and
the school reform movement in public education, often require
today’s counselors to use evaluation methods to demonstrate the
effectiveness and impact of their counseling services.
Program Evaluation and Accountability
Distinguishing between program evaluation and accountability
is essential because many professionals use the terms inter-
changeably and, occasionally, as categories of each other. For
instance, Isaacs (2003) viewed program evaluation as a type of
accountability that focuses primarily on program effectiveness
and improvement. However, from our perspective, counseling
program evaluation precedes accountability. As defined by
Loesch (2001), counseling program evaluations help practi-
tioners “maximize the efficiency and effectiveness of service
delivery through careful and systematic examination of program
components, methodologies, and outcomes” (p. 513). Counsel-
ing program evaluations, thus, have inherent value in helping
practitioners plan, implement, and refine counseling practice
regardless of the need to demonstrate accountability. However,
when called on to provide evidence of program effectiveness
and impact, counselors can effectively draw on information
gathered from their own program evaluations.
We, thus, conceptualize counseling accountability as provid-
ing specific information to stakeholders and other supervising
authorities about the effectiveness and efficiency of counseling
services (Studer & Sommers, 2000). In our view, demonstrat-
ing accountability forms a bridge between counseling practice
and the broader context of the service impact on stakeholders.
However, accountability should not be the sole motivation for
counseling program evaluation. As emphasized by Loesch
(2001), counseling program evaluations should be undertaken
to improve counseling services rather than merely to provide a
justification for existing programming.
The Need for New Models of Counseling
Program Evaluation
We believe that a significant contributor to counselors’ dis-
interest in evaluation involves the lack of practical program
evaluation models available to them for this purpose. Fur-
thermore, confusion about the differences between program
evaluation and accountability appears to deter counselors from
engaging in ongoing program evaluations (Loesch, 2001).
Therefore, the development of new, counselor-specific models
that clearly conceptualize program evaluation and account-
ability may provide the necessary impetus to establish program
evaluation as a standard of practice in counseling.
Recent examples of counselor-focused evaluation ap-
proaches include Lusky and Hayes’s (2001) consultation
model of counseling program evaluation and Lapan’s (2001)
framework for planning and evaluating school counseling
programs. Gysbers and Henderson (2000) also discussed the
role of evaluation in school counseling programs and offered
practical strategies and tools that counselors could imple-
ment. These approaches have helped maintain a focus on the
importance of counseling program evaluation.
The purpose of this article was to build on the emerg-
ing counselor-focused literature on program evaluation by
providing counselors with a practical model for developing
and implementing evaluation-based counseling services. As
Whiston (1996) emphasized, counseling practice and research
form a continuum rather than being mutually exclusive activi-
ties. Although some counselors may identify more strongly
with research and others more strongly with practice, both
perspectives provide valuable feedback about the impact of
counseling on clients served. Indeed, evaluation and feedback
are integral parts of the counseling process, and most coun-
selors will identify with the idea of refining their practice by
using feedback from numerous sources as a basis.
This article is geared both to practitioners who may have
had little prior training in or experience with counseling
program evaluations and to counselor educators interested in
training students in counseling program evaluation methods.
We begin by discussing accountability in counseling and the
uses of counseling program evaluation. Next, we present
the Accountability Bridge Counseling Program Evaluation
Model and discuss the steps involved in its implementation.
Finally, we discuss implications and make recommendations
for training counselors in evaluation skills.
Accountability in Counseling
Accountability has become a catchword in today’s sociopoliti-
cal climate. Since the 1960s, local, state, and federal govern-
ment spending has been more closely scrutinized and the
effectiveness of social programs and initiatives more carefully
questioned (Houser, 1998; Kirst, 2000). As professionals in
the social services field, counselors have not been shielded
from the demands to demonstrate successful and cost-effective
outcomes, nor have counseling programs. Despite increas-
ing pressure to document effectiveness, some counselors
maintain that counseling programs are generally immeasur-
able (Loesch, 2001). However, given the rising demands for
accountability in education and social programs, such an
attitude is undoubtedly naïve. In fact, funding of educational
programs and social services often hinges on the ability to
demonstrate successful outcomes to stakeholders. Because
counselors often rely on third-party and government funding,
the future of the counseling profession may indeed rest on the
ability of practitioners to answer the calls for documentation
of effectiveness (Houser, 1998).
School Counseling Accountability
Today’s school counselors face increased demands to demon-
strate program effectiveness (Adelman, 2002; Borders, 2002;
Herr, 2002; House & Hayes, 2002; Lusky & Hayes, 2001).
Primarily rooted in the school reform movement, demonstrat-
ing accountability is becoming a standard practice among
school counselors (Dahir & Stone, 2003; Fairchild & Seeley,
1995; Hughes & James, 2001; Myrick, 2003; Otwell & Mullis,
1997; Vacc & Rhyne-Winkler, 1993). Standards-based educa-
tion reforms, including the No Child Left Behind (NCLB)
Act of 2001, have fueled pressures on local school systems
to demonstrate effective educational practices (Albrecht &
Joles, 2003; Finn, 2002; Gandal & Vranek, 2001). The NCLB
Act of 2001 emphasizes student testing and teacher effective-
ness; however, school counselors have also recognized that in
the current educational environment, actively evaluating the
effectiveness of their school counseling programs is crucial.
Although the pressures for accountability have seemingly
increased in recent years, Lapan (2001) noted that school
counselors have developed results-based systems and used
student outcome data for many years. Furthermore, school
counselors have historically been connected with school re-
form, and their roles have often been shaped by educational
legislation (Herr, 2002).
Although accountability demands are numerous, school
counselors may fail to evaluate their programs because of time
constraints, elusiveness of measuring school counseling out-
comes, lack of training in research and evaluation methods, and
the fear that evaluation results may discredit school counseling
programs (Schmidt, 1995). Because of these factors, when
school counselors attempted to provide accountability, they may
have relied on simple tallies of services and programs offered to
students. However, as discussed by Fairchild and Seeley (1995),
merely documenting the frequency of school counseling services
no longer meets the criteria for demonstrating program effective-
ness. Although data about service provision may be important,
school counselors must engage in ongoing evaluations of their
counseling programs in order to assess the outcomes and the
impact of their services.
Trevisan (2000) emphasized that school counseling pro-
gram evaluation may help the school counseling profession
by providing accountability data to stakeholders, generating
feedback about program effectiveness and program needs, and
clarifying the roles and functions of school counselors. As the
profession of school counseling evolves, increasing emphasis
on leadership and advocacy (Erford, House, & Martin, 2003;
House & Sears, 2002) and on comprehensive school coun-
seling programs (American School Counselor Association
[ASCA], 2003; Sink & MacDonald, 1998; Trevisan, 2002b)
will coincide with ongoing research and program evaluation
efforts (Paisley & Borders, 1995; Whiston, 2002; Whiston
& Sexton, 1998). ASCA’s (2003) revised national standards
for school counseling reflect the importance of school coun-
seling accountability and provide direction for practicing
school counselors in the evaluation of their comprehensive
school counseling programs (Isaacs, 2003). Considering the
accountability and outcomes-focused initiatives in today’s
education environment, school counselors need skills and
tools for systematically evaluating the impact of the services
they provide (Trevisan, 2001).
Mental Health Counseling Accountability
Like professional school counselors, today’s mental health
counselors have experienced significant pressures to dem-
onstrate the effectiveness and the efficiency of their counsel-
ing services. To secure managed care contracts and receive
third-party reimbursements, mental health counselors are
increasingly required to keep detailed records about specific
interventions and outcomes of counseling sessions (Granello
& Hill, 2003; Krousel-Wood, 2000; Sexton, 1996). Despite
the financial implications of avoiding such accountability
measures, many mental health counselors have fought for
autonomy from third-party payers in the provision of coun-
seling services. Mental health counselors often indicate that
their ability to provide quality mental health care to clients is
hampered by managed care’s demands to demonstrate tech-
nical proficiency and cost-effective service delivery (Scheid,
2003). Furthermore, mental health counselors often express
concerns about their therapeutic decision-making capacity
being curtailed by managed care (Granello & Hill, 2003).
Managed care’s mandate for accountability in the field of
mental health counseling may have resulted, in part, from
counselors’ failure to initiate their own outcomes assessments
(Loesch, 2001). However, the emergence of empirically sup-
ported treatments (ESTs) has helped counselors respond to
the call for accountability from managed care (Herbert, 2003).
Specifically, ESTs draw on evidence-based practices from
empirical counseling research to provide counselors with
intervention guidelines and treatment manuals for specific
client problems. Yet, mental health counselors may resist the
use of such approaches, insisting that counseling procedures
and outcomes cannot be formally measured and that attempt-
ing such evaluations merely reduces time spent providing
counseling services (Sanderson, 2003). Today’s managed
care companies, however, may require counselors to base
their practice on specific ESTs in order to receive payment
for services. Further complicating the issue is the fact that,
as previously noted with other areas of counseling, mental
health counselors often receive no training in evaluating the
outcomes and impact of their services (Granello & Hill, 2003;
Sexton et al., 1997). Ultimately, resistance from mental health
counselors to document counseling outcomes may be due to
insufficient counselor training in evaluation methods.
Despite the tumultuous history of the pressures brought
to bear on mental health practitioners by managed care for
accountability, there is a major impetus for shifting toward
examining program effectiveness and outcomes in mental
health counseling—the benefit of forging a professional
identity. Kelly (1996) underscored the need for mental health
counselors to be accepted as legitimate mental health provid-
ers who are on the same professional level as social workers,
psychologists, and psychiatrists. The ability to document
outcomes and identify effective treatments is, therefore, criti-
cal in furthering the professional identity of mental health
counselors within the mental health professions.
Accountability in Other Counseling Specialties
Although most literature on counseling accountability empha-
sizes school and mental health settings, calls for accountability
have also been directed to other counseling specialties. Bishop
and Trembley (1987) discussed the accountability pressures
faced in college counseling centers. Similar to school coun-
selors and mental health counselors, college counselors and
those in authority in college counseling centers have resisted
accountability demands placed on them by authorities in
higher education. Bishop and Trembley also noted that some
counselors have maintained that counseling centers are de-
signed for practice rather than research.
Ultimately, all counseling practitioners, regardless of spe-
cialty area, are faced with the need to demonstrate program
effectiveness. Although counselors may be hesitant or unwill-
ing to evaluate the effectiveness of their services because they
see little relevance to their individual practice, the future of
the counseling profession may well be shaped by the way
practitioners respond to accountability demands.
Program Evaluation in Counseling
In recent years, the terms program evaluation and ac-
countability have often been used synonymously in dis-
cussions of counseling research and outcomes. However,
accountability efforts in counseling generally result from
external pressures to demonstrate efficiency and effec-
tiveness. On the other hand, counselor-initiated program
evaluations can be used to better inform practice and
improve counseling services. We believe that a key shift
in the profession would be to have counselors continu-
ally evaluate their programs and outcomes not because
of external pressures, but from a desire to enhance client
services and to advocate for clients and the counseling
profession. New perspectives on the role of evaluation of
counseling practices may ultimately help program evalu-
ation become a standard of practice in counseling.
Program evaluation models have proliferated in the fields
of economics, political science, sociology, psychology, and
education (Hosie, 1994) and have been used for improving
quality (Ernst & Hiebert, 2002), assessing goal achievement,
aiding decision making, determining consumer impact, and
examining cost-effectiveness (Madaus & Kellaghan, 2000).
Many program evaluation models were developed for use in
large-scale organizational evaluations and are, thus, impracti-
cal for use by counselors. Furthermore, large-scale program
evaluation models are generally based on the assumption that a
staff of independent evaluation experts or an assessment team
will plan and implement the evaluation. Within the counsel-
ing professions, however, financial constraints generally
make such independent evaluations of programs unfeasible.
Consequently, counselors usually rely on limited resources
and their own research skills to carry out an evaluation of
program effectiveness. Fortunately, many of the principles
and practices of large-scale evaluation models can be adapted
for use by counselors.
Given the wide range of program evaluation definitions and
approaches, models from human services professions and edu-
cation appear most relevant for the needs of counselors because
these models generally emphasize ongoing evaluation for pro-
gram improvement (e.g., Stufflebeam, 2000a). Counseling pro-
gram evaluation may be defined as the ongoing use of evaluation
principles by counselors to assess and improve the effectiveness
and impact of counseling programs and services. Ongoing coun-
seling program evaluations can provide crucial feedback about
the direction and the growth of counseling services and can also
meet the accountability required by stakeholders (Boulmetis &
Dutwin, 2000; Loesch, 2001; Stufflebeam, 2000b).
Reasons for Evaluating Counseling Programs
Program evaluations may be initiated for various reasons;
however, evaluations are intended to generate practical in-
formation rather than to be mere academic exercises (Royse,
Thyer, Padgett, & Logan, 2001). Counseling program evalu-
ations should, therefore, provide concrete information about
the effectiveness, the efficiency, and the impact of services
(Boulmetis & Dutwin, 2000). Specifically, counseling pro-
gram evaluations can yield information that will demonstrate
the degree to which clients are being helped. Evaluations may
also provide feedback about client satisfaction and can help
to distinguish between effective and ineffective approaches
for the populations being served (Isaacs, 2003). On a broader
scope, program evaluations can help to determine if services
are having an influence on larger social problems (Royse et
al., 2001). On the contextual level, evaluations can provide
information about the use of staff and program resources in
the provision of services (Stufflebeam, 2000a).
Accountability to stakeholders has often been a consideration
in formulating approaches to counseling program evaluation. For
example, Lapan (2001) indicated that program evaluations help
counselors to identify effective services that are valued by stake-
holders. Thus, by using stakeholder feedback in program planning
and then providing valued services, counselors are better prepared
to demonstrate the accountability of their programs and practice.
Internal accountability may be requested by administrators of local
programs to determine if program staff and resources are being
used effectively. On the other hand, external accountability may
be requested by policy makers and stakeholders with an interest
in the effectiveness of provided services (Priest, 2001).
Counseling program evaluations are generally implemented to
provide information about local needs; however, in some instances
information from local evaluations may have significant implica-
tions for the entire counseling profession. As discussed by Whiston
(1996), the professional identity of counselors can be enhanced
through action research that demonstrates the effectiveness of ser-
vices. By conceptualizing program evaluations as a type of action
research, counselors have the potential to consider this effort as a
contribution to the growing research base in counseling.
Questions That Evaluations May Answer
Counseling program evaluations, like all forms of evalua-
tions, are undertaken to answer questions about the effective-
ness of programs and services in meeting specific goals (Berk
& Rossi, 1999). Questions about the overall effectiveness
and impact of services may be answered, as well as more
discrete, problem-specific concerns. Furthermore, questions
posed in evaluations help guide the collection and analysis
of outcome information and the subsequent reporting of
outcomes to stakeholders.
Numerous questions may be explored with evaluations.
Powell, Steele, and Douglah (1996) indicated that evalu-
ation questions generally fall into four broad categories:
outcomes and impacts, program need, program context, and
program operations. The following are some examples of
the types of questions that counseling program evaluations
may answer:
• Are clients being helped?
• What methods, interventions, and programs are most
helpful for clients?
• How satisfied are clients with services received?
• What are the long-term effects of counseling programs
and services?
• What impact do the services and programs have on
the larger social system?
• What are the most effective uses of program staff?
• How well are program objectives being met?
Program evaluations are generally guided by specific
questions related to program objectives. Guiding questions
help counselors to plan services and gather data specific to
the problems under investigation. Depending on program and
stakeholder needs, counseling evaluations may be designed
to answer many questions simultaneously or they may be
focused on specific objectives and outcomes. As part of an
ongoing process, the initial cycle of a counseling program
evaluation may yield information that can help to define or
refine further problems and questions for exploration in the
next evaluation cycle.
Ultimately, counseling program evaluations may serve many
purposes and may provide answers to a variety of questions.
However, if counselors are to implement evaluations, a practical
framework for conceptualizing the evaluation process seems
essential. Counselors, thus, need a conceptual foundation for
guiding the evaluation of their programs and services.
The Accountability Bridge Counseling
Program Evaluation Model for Counselors
The Accountability Bridge Counseling Program Evaluation
Model (see Figure 1) provides a framework to be used by
individual counselors and within counseling programs and
counseling agencies to plan and deliver counseling services
and to assess their effectiveness and impact. Drawing on
concepts from the business evaluation model proposed by
Ernst and Hiebert (2002) and the Context, Input, Process,
Product Model (CIPP) developed by Stufflebeam (2000a),
the Accountability Bridge Counseling Program Evaluation
Model organizes counseling evaluation into two recur-
ring cycles that represent a continual refinement of services
based on outcomes, stakeholder feedback, and the needs of
the populations served. The counseling program evaluation
cycle focuses on the provision and outcomes of counseling
services, whereas the counseling context evaluation cycle ex-
amines the impact of counseling services on stakeholders and
uses their feedback, along with the results yielded by needs
assessments, to establish and refine the goals of counseling
programs. The two cycles are connected by an “accountability”
bridge, whereby results from counseling practices are com-
municated to stakeholders within the context of the larger
service system. Providing accountability to stakeholders is,
therefore, an integral part of the model. Although it is beyond
the scope of this article to discuss each component in depth, a
basic review of the framework and principles of the model will
help counselors begin to conceptualize the process of planning
and implementing counseling program evaluations.
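
To make the model's flow concrete, here is a minimal Python sketch, our own illustration rather than anything from the article: it encodes the two recurring cycles as ordered stage lists and walks them in sequence, with the accountability bridge as the hand-off between cycles. The stage names follow the model; the function and variable names are invented for the example.

# Illustrative sketch of the Accountability Bridge model's flow.
# Stage names follow the article; everything else is hypothetical.

PROGRAM_EVALUATION_CYCLE = [
    "Program planning",
    "Program implementation",
    "Program monitoring and refinement",
    "Outcomes assessment",
]

CONTEXT_EVALUATION_CYCLE = [
    "Feedback from stakeholders",
    "Strategic planning",
    "Needs assessment",
    "Service objectives",
]

def walk_model(cycles: int = 2) -> None:
    """Walk the model: program cycle, accountability bridge, context
    cycle, then back into program planning for the next round."""
    for n in range(1, cycles + 1):
        print(f"--- Evaluation round {n} ---")
        for stage in PROGRAM_EVALUATION_CYCLE:
            print(f"  [program cycle] {stage}")
        # The bridge: communicate outcomes and results to stakeholders.
        print("  [bridge]        Communicate results to stakeholders")
        for stage in CONTEXT_EVALUATION_CYCLE:
            print(f"  [context cycle] {stage}")
        # Service objectives feed the next round of program planning.

if __name__ == "__main__":
    walk_model()

The loop makes the article's central point visible: evaluation is continual, with each round's service objectives feeding the next round's program planning.
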
Counseling Program Evaluation Cycle
The counseling program evaluation cycle involves the planning
and implementation of counseling practice and culminates with
assessing the outcomes of individual and group counseling,
guidance services, and counseling programs. Four stages are
involved in the counseling program evaluation cycle.
1. Program planning. Although we enter the discussion of
the model at the program planning stage, information obtained
from the counseling context evaluation cycle is critical in the
planning process. Thus, on the basis of input obtained from
needs assessments and the subsequent formation of service
objectives, counseling programs and services are planned
and developed to address the needs of the populations served.
Program planning involves identifying specific counsel-
ing methods and activities that are appropriate for certain
populations as well as determining the availability of needed
resources, including staff, facilities, and special materials
(Royse et al., 2001).
Lapan (2001) stressed that effective school counseling
programs meet objectives by planning results-based inter-
ventions that can be measured. Therefore, a key component
of the program planning process involves the simultaneous
planning of methods for measuring outcomes (Boulmetis &
Dutwin, 2000). For instance, during the program planning
phase, a community counseling agency that is planning a
new substance abuse aftercare program should determine
the means of assessing client progress through the program.
Furthermore, developing multiple outcome measures can
help increase the validity of findings. Gysbers and Hender-
son (2000) discussed several means for assessing school
counseling outcomes, including pretest–posttest instruments,
performance indicators, and checklists. Studer and Sommers
(2000) indicated that multiple measures, such as assessment
instruments, observable data, available school-based data,
and client/parent/teacher interviews, could be used in school
counseling program evaluation. In mental health and college
counseling specialties, similar measures of client and program
progress can be used, including standardized assessment tools
such as depression and anxiety inventories. Other means
of collecting outcome data include surveys, individual and
group interviews, observation methods, and document review
(Powell et al., 1996). Furthermore, data can be collected over
a 1- to 3-year period to determine program effectiveness over
longer periods of time (Studer & Sommers, 2000).
A final consideration in the program planning stage
involves determining when clients will complete selected
measures and assessments. Individuals who will be respon-
sible for gathering and processing the information should be
identified as well. For example, in a community agency setting,
counselors may take responsibility for collecting data about
their own client caseload, whereas a counselor supervisor may
collect data from community sources.
2. Program implementation. After programs and services
have been planned and outcome measures have been selected,
programs and services are initiated. Sometimes referred to as
“formative evaluation,” the program implementation phase
actualizes the delivery of services shaped by input from the
counseling context evaluation cycle. During program imple-
mentation, counselors may identify differences between the
planned programs and the realities of providing the services.
Therefore, at this point, decisions may be made to change
programs before they are fully operational or to make refine-
ments in programs and services as the need arises.
3. Program monitoring and refinement. Once programs and
services have been initiated and are fully operational, coun-
selors may need to make adjustments to their practice based
on preliminary results and feedback from clients and other
interested parties. Programs and services may, therefore, need
to be refined and altered to successfully meet the needs of the
clientele served. Monitoring program success helps to ensure
the quality of counseling services and maximizes the likelihood
of finding positive results during outcomes assessments.
[Figure 1. Accountability Bridge Counseling Program Evaluation Model: two recurring evaluation cycles joined by the accountability bridge; visible labels include Program Monitoring and Refinement and Feedback From Stakeholders.]

4. Outcomes assessment. As programs and services are
completed, outcomes assessments help to determine if objectives
have been met. Therefore, during the outcomes assessment
phase, final data are collected, and all program data are analyzed
to determine the outcomes of interventions and programs.
Counseling outcome data should be analyzed and interpreted as
soon as possible after being collected (Gysbers & Henderson,
2000). Data analysis approaches differ for quantitative and
qualitative data, and counselors with limited research back-
ground may need to seek assistance from peers and supervisors
with knowledge of analyzing a variety of data sets. Available
data analysis computer software can also expedite the analysis
and interpretation of data. Such software programs also allow
for easy creation of charts and graphs that can play a key role
in the dissemination of evaluation results.
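
As a small, hedged illustration of the point about software and charts, the following Python sketch (standard library only; the satisfaction categories and counts are hypothetical) turns tallied outcome data into a crude text bar chart of the kind that charting software would polish for a formal report.

# Hypothetical client-satisfaction tallies; charting software would
# produce a polished version of this for a formal report.
satisfaction = {
    "Very satisfied": 18,
    "Satisfied": 24,
    "Neutral": 7,
    "Dissatisfied": 3,
}

total = sum(satisfaction.values())
for label, count in satisfaction.items():
    bar = "#" * count
    print(f"{label:<15} {bar:<25} {count:>2} ({count / total:.0%})")
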
The Accountability Bridge
We conceptualize the process of communicating outcome data
and program results to stakeholders as the “accountability
bridge” between counseling programs and the context of
counseling services. Outcome data and evaluation findings
are the means for providing information about program ef-
fectiveness to stakeholders. When counselors are asked to
demonstrate program effectiveness and efficiency, they can
present information from the counseling program evaluation
cycle to interested parties. However, beyond being merely an
ameliorative process, communicating results to stakehold-
ers can also be conceptualized as a marketing tool whereby
counselors help maintain support and increase the demands for
their services (Ernst & Hiebert, 2002). Therefore, rather than
waiting for external requests for accountability, counselors
should consider the task of communicating program results
to stakeholders as being a standard part of the counseling
program evaluation process.
In the program evaluation literature, stakeholders are often
referred to as “interested parties” (Berk & Rossi, 1999), mean-
ing all individuals and organizations involved in or affected
by a program (Boulmetis & Dutwin, 2000). As discussed by
Loesch (2001), the most obvious stakeholders in counseling
programs are those clients receiving services. In addition,
stakeholders of counseling programs may include funding
sources, other professional counselors, community members,
administrators, staff, and organizations or programs that refer
clients. Information provided to stakeholders must be tailored
to address the concerns of the specific group. For instance,
when communicating results, counselors may want to consider
if their audience will be more impressed with numbers and
statistics or if case studies and personal narratives will be
more effective (Powell et al., 1996).
Evaluation reports and summaries can be used to dissemi-
nate information about program outcomes to stakeholders.
Counseling program evaluation reports may be structured to
include (a) an introduction defining the purposes and goals of
programs and of the evaluation, (b) a description of programs
and services, (c) a discussion of the evaluation design and
data analysis procedures, (d) a presentation of the evaluation
results, and (e) a discussion of the findings and recommenda-
tions of the evaluation (Gysbers & Henderson, 2000; Royse et
al., 2001). In addition to written reports, formal presentations
of program results may also be an effective means for fulfilling
the requirement of accountability to stakeholders.
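
The five-part report structure above lends itself to a simple template. The Python sketch below, our illustration with a placeholder program name, assembles that outline; the section titles follow the list in the preceding paragraph.

# Section titles follow the article's suggested report structure;
# the program name is a hypothetical placeholder.
REPORT_SECTIONS = [
    "Introduction: purposes and goals of the program and the evaluation",
    "Description of programs and services",
    "Evaluation design and data analysis procedures",
    "Presentation of evaluation results",
    "Discussion of findings and recommendations",
]

def report_outline(program_name: str) -> str:
    lines = [f"Evaluation Report: {program_name}", ""]
    for i, section in enumerate(REPORT_SECTIONS, start=1):
        lines.append(f"{i}. {section}")
    return "\n".join(lines)

print(report_outline("Substance Abuse Aftercare Program"))
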
Counseling Context Evaluation Cycle
The counseling context evaluation cycle focuses on the im-
pact that the counseling practice has on stakeholders in the
context of the larger organizational system. Using feedback
from stakeholders, counselors and individuals responsible for
counseling programs may engage in strategic planning and
conduct needs assessments to develop and refine program
objectives. The counseling context evaluation cycle consists
of four stages.
1. Feedback from stakeholders. Once outcome data have
been reported to stakeholders, counselors should actively
solicit their feedback. Indeed, stakeholder feedback should
be considered a vital element in the eventual design and
delivery of counseling services. Viability of counseling ser-
vices is maintained through a continual cycle of stakeholder
feedback regarding the development of program goals and
the design and evaluation of counseling services (Ernst &
Hiebert, 2002).
2. Strategic planning. After feedback from stakeholders
has been solicited, counselors and individuals in their orga-
nizational systems may engage in strategic planning designed
to examine the operations of the organization. In particular,
strategic planning may include an examination and possible
revision of the purpose and mission of programs and services.
Furthermore, during strategic planning, decisions about the al-
location of staff and monetary resources may be considered.
3. Needs assessment. Coinciding with strategic planning,
needs assessments can help provide counselors with crucial
information that shapes the provision of counseling programs
and services. In particular, identifying the needs of stakehold-
ers is a key part of developing programs that will have positive
impact. Needs assessments should, therefore, gather informa-
tion from multiple stakeholders and should be planned with
a clear indication of what information is needed (Royse et
al., 2001; Stufflebeam, McCormick, Brinkerhoff, & Nelson,
1985). A key part of needs assessment is the development of
the method or instrument for collecting information. Written
surveys and checklists can be used as well as focus-group
meetings, interviews, and various forms of qualitative inquiry.
Effective needs assessments will help clarify and prioritize
needs among stakeholders and the populations served.
4. Service objectives. Developing precise program goals
and objectives is crucial for the eventual provision and evalua-
tion of counseling programs and services. Goals and objectives
should be developed based on prior outcomes of counseling
services, stakeholder feedback, and information gathered from
needs assessments. Programs without clearly identified goals
and objectives cannot be evaluated for impact and effective-
ness (Berk & Rossi, 1999). Royse et al. (2001) discussed two
main types of program objectives: process objectives and
outcome objectives. Process objectives may be thought of as
milestones or competencies needed for achieving long-term
goals. In counseling, process objectives may be considered as
a series of benchmarks that indicate progress toward program
growth and improvement. Process objectives are achieved
through a series of developmental steps, whereas outcome
objectives refer to specific competencies or outcomes to be
achieved in a given time period.
Once program objectives have been established, the entire
evaluation cycle is repeated, with information from the coun-
seling context evaluation cycle feeding back into the program
planning stage of the counseling program evaluation cycle.
Ultimately, counseling program evaluation should be consid-
ered an ongoing process rather than a single incident.
Implications for Counselors and
Counselor Education
Meeting the Challenges of Counseling
Program Evaluations
Although counseling program evaluation may enhance client
services and promote the professional identity of counselors,
barriers to implementing program evaluation cannot be over-
looked. First of all, program evaluation practices have often
been considered too time-consuming and complex
(Loesch, 2001; Wheeler & Loesch, 1981). Thus, counselors
who have not previously initiated evaluations of their programs
and services may be hesitant to embark on a seemingly difficult
task. However, by conceptualizing program evaluation as a
collaborative process, counselors may be more interested and
motivated to participate in evaluations. By teaming with other
professionals, counselors may help to ensure that evaluations are
implemented effectively and that results are disseminated in an
effective manner. Furthermore, collaboration helps counselors
new to program evaluation to obtain support and mentoring
during the evaluation process (Trevisan, 2002a).
Another major obstacle to any outcome or evaluation
study of counseling is the complex and dynamic nature of the
counseling process itself. As discussed by Whiston (1996),
the seemingly immeasurable nature of counseling often makes
straightforward evaluations of its effectiveness difficult. The
complexity of counseling processes may be addressed by
developing program and service objectives that are more
readily measurable. For example, client improvement is a
concept that seems vague and difficult to measure. However,
by being more specific and operationalizing definitions of
client improvement, counselors can more easily measure cli-
ent change. For instance, comparing pre- and posttreatment
scores on a standardized measure of depression can provide
counselors with one measure of the effectiveness of counseling
interventions.
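
A minimal sketch of that pre/post comparison, assuming hypothetical scores from a standardized depression inventory (higher = more severe): it computes the mean change and the paired t statistic with the Python standard library; a statistics package would supply the corresponding p-value.

from math import sqrt
from statistics import mean, stdev

# Hypothetical scores for ten clients on a standardized depression
# inventory, before and after a counseling intervention.
pre  = [28, 31, 24, 35, 27, 30, 33, 26, 29, 32]
post = [19, 25, 20, 27, 18, 24, 28, 21, 22, 26]

diffs = [a - b for a, b in zip(pre, post)]   # positive = improvement
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / sqrt(n))   # paired t statistic, df = n - 1

print(f"Mean improvement: {mean(diffs):.1f} points (n = {n})")
print(f"Paired t({n - 1}) = {t:.2f}")
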
Considerations for Training and Research
in Program Evaluation Methods
Despite increased focus on accountability and calls for
evaluation-based counseling practice, counselors frequently
lack the training to evaluate the effectiveness and
impact of their services. Counselor training has rarely em-
phasized research and evaluation skills as a method for guid-
ing practice (Heppner et al., 1999; Sexton et al., 1997). As a
result, counselors may see little utility in acquiring and using
research and evaluation skills. Counselor educators must, there-
fore, reconsider the importance placed on acquiring research
and evaluation skills in the training of new counselors. The
2001 standards of the Council for Accreditation of Counsel-
ing and Related Educational Programs have addressed the
need for today’s counselors to develop skills in research and
evaluation. Yet, as pointed out by Trevisan (2000), the mere
inclusion of evaluation skills in training standards has not
spurred counselors’ use of evaluation activities.
Whiston and Coker (2000) called for reconstructing the
clinical training of counselors based on findings in counseling
research. Integrating evaluation and research practices into
clinical training may likewise enhance the clinical preparation
of new counselors by giving them supervised experiences in
which they use evaluation methods. Trevisan (2000, 2002a)
advocated for a sequential approach to teaching program eval-
uation skills in counselor education programs. Accordingly,
counselors might first receive didactic training in evaluation
and research methods. Next, counselors could be given clinical
experiences that would allow them to implement research and
evaluation skills under supervision. Finally, trained counselors
would be able to conceptualize and implement evaluations
of counseling programs on their own, consulting with other
professionals as necessary.
In addition to revising the evaluation and research train-
ing in counselor education, providing postgraduate training
and workshop opportunities to practicing counselors must be
considered. Counseling conferences should, therefore, actively
solicit programs and presentations geared toward helping
counselors develop skills in research and evaluation. Further-
more, counselors should purposefully seek opportunities for
the development of their research and evaluation skills.
Although counseling program evaluation has been dis-
cussed for many years, few studies have appeared in the
literature that examine the use of program evaluation by prac-
ticing counselors. We, therefore, issue a call to the profession
to systematically investigate the use of evaluation practices
in counseling. Such findings could have a substantial impact
on the continued development of the counseling profession
by providing further understanding of counseling program
evaluation models and practices.
Conclusion
Twenty-first century counselors can no longer question the
merit of and need for evaluating their counseling programs
and services. Instead, today’s counselors must actively learn
about and use evaluation methods as a means of enhanc-
ing their counseling practices, providing accountability to
stakeholders, and enhancing the professional identity of
all counselors. As Wheeler and Loesch (1981) predicted
nearly 25 years ago, program evaluation continues to be
a force in the development of the counseling professions.
They likewise suggested that counseling professionals are
gradually beginning to recognize that if counseling program
evaluations are to be used, they must be initiated and imple-
mented by counselors themselves. Given the persistence
of the topic and the ongoing calls for outcomes research
and accountability of counseling practices, program evalu-
ation can no longer be ignored by counseling professionals.
Indeed, program evaluation may be considered a newly
evolving standard of practice in counseling.
References
Adelman, H. S. (2002). School counselors and school reform: New
directions. Professional School Counseling, 5, 235–248.
Albrecht, S. F., & Joles, C. (2003). Accountability and access to op-
portunity: Mutually exclusive tenets under a high-stakes testing
mandate. Preventing School Failure, 48, 86–91.
American School Counselor Association. (2003). The American
School Counselor Association National Model: A framework for
school counseling programs. Alexandria, VA: Author.
Benkofski, M., & Heppner, C. C. (1999). Program evaluation. In P. P.
Heppner, D. M. Kivlighan, & B. E. Wampold, Research design in
counseling (pp. 488–513). Belmont, CA: Wadsworth.
Berk, R. A., & Rossi, P. H. (1999). Thinking about program evalu-
ation (2nd ed.). Thousand Oaks, CA: Sage.
Bishop, J. B., & Trembley, E. L. (1987). Counseling centers and
accountability: Immoveable objects, irresistible forces. Journal
of Counseling and Development, 65, 491–494.
Borders, L. D. (2002). School counseling in the 21st century: Personal and
professional reflections. Professional School Counseling, 5, 180–185.
Borders, L. D., & Drury, S. M. (1992). Comprehensive school
counseling programs: A review for policymakers and practitio-
ners. Journal of Counseling & Development, 70, 487–498.
Boulmetis, J., & Dutwin, P. (2000). The ABCs of evaluation: Timeless
techniques for program and project managers. San Francisco:
Jossey-Bass.
Council for Accreditation of Counseling and Related Educational
Programs. (2001). CACREP accreditation manual. Alexandria,
VA: Author.
Dahir, C. A., & Stone, C. B. (2003). Accountability: A M.E.A.S.U.R.E.
of the impact school counselors have on student achievement.
Professional School Counseling, 6, 214–221.
Erford, B. T., House, R., & Martin, P. (2003). Transforming the school
counseling profession. In B. T. Erford (Ed.), Transforming the
school counseling profession (pp. 1–20). Upper Saddle River,
NJ: Prentice Hall.
Ernst, K., & Hiebert, B. (2002). Toward the development of a program
evaluation business model: Promoting the longevity of counsel-
ling in schools. Canadian Journal of Counselling, 36, 73–84.
Fairchild, T. N. (1993). Accountability practices of school
counselors: 1990 national survey. The School Counselor, 40,
363–374.
Fairchild, T. N. (1994). Evaluation of counseling services: Account-
ability in a rural elementary school. Elementary School Guidance
and Counseling, 29, 28–37.
Fairchild, T. N., & Seeley, T. J. (1995). Accountability strategies for
school counselors: A baker’s dozen. The School Counselor, 42,
377–392.
Finn, C. E. (2002). Making school reform work. The Public Inter-
est, 148, 85–95.
Gandal, M., & Vranek, J. (2001, September). Standards: Here today,
here tomorrow. Educational Leadership, 6–13.
Granello, D. H., & Hill, L. (2003). Assessing outcomes in practice
settings: A primer and example from an eating disorders program.
Journal of Mental Health Counseling, 25, 218–232.
Gysbers, N. C., & Henderson, P. (2000). Developing and managing
your school guidance program (3rd ed.). Alexandria, VA: Ameri-
can Counseling Association.
Gysbers, N. C., Hughey, K., Starr, M., & Lapan, R. T. (1992). Im-
proving school guidance programs: A framework for program,
personnel, and results evaluation. Journal of Counseling &
Development, 70, 565–570.
Hadley, R. G., & Mitchell, L. K. (1995). Counseling research and
program evaluation. Pacific Grove, CA: Brooks/Cole.
Heppner, P. P., Kivlighan, D. M., & Wampold, B. E. (1999). Research
design in counseling (2nd ed.). Belmont, CA: Wadsworth.
Herbert, J. D. (2003). The science and practice of empirically sup-
ported treatments. Behavior Modification, 27, 412–430.
Herr, E. L. (2002). School reform and perspectives on the role of
school counselors: A century of proposals for change. Profes-
sional School Counseling, 5, 220–234.
Hosie, T. (1994). Program evaluation: A potential area of exper-
tise for counselors. Counselor Education and Supervision, 33,
349–355.
House, R. M., & Hayes, R. L. (2002). School counselors: Becoming
key players in school reform. Professional School Counseling,
5, 249–256.
House, R. M., & Sears, S. J. (2002). Preparing school counselors to
be leaders and advocates: A critical need in the new millennium.
Theory Into Practice, 41, 154–162.
Houser, R. (1998). Counseling and educational research: Evaluation
and application. Thousand Oaks, CA: Sage.
Hughes, D. K., & James, S. H. (2001). Using accountability data to
protect a school counseling program: One counselor’s experience.
Professional School Counseling, 4, 306–309.
Isaacs, M. L. (2003). Data-driven decision making: The engine of
accountability. Professional School Counseling, 6, 288–295.
Kellaghan, T., & Madaus, G. F. (2000). Outcome evaluation. In D.
L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation
models: Viewpoints on educational and human services evalua-
tion (2nd ed., pp. 97–112). Boston: Kluwer Academic.
Kelly, K. R. (1996). Looking to the future: Professional identity,
accountability, and change. Journal of Mental Health Counsel-
ing, 18, 195–199.
Kettner, P. M., Moroney, R. M., & Martin, L. L. (1999). Designing
and managing programs: An effectiveness-based approach (2nd
ed.). Thousand Oaks, CA: Sage.
Kirst, M. W. (2000). Accountability: Implications for state and local
policy makers. In D. L. Stufflebeam, G. F. Madaus, & T. Kel-
laghan (Eds.), Evaluation models: Viewpoints on educational
and human services evaluation (2nd ed., pp. 319–339). Boston:
Kluwer Academic.
Krousel-Wood, M. A. (2000). Outcomes assessment and performance
improvement: Measurements and methodologies that matter in
mental health care. In P. Rodenhauser (Ed.), Mental health care
administration: A guide for practitioners (pp. 233–253). Ann
Arbor: University of Michigan Press.
Lapan, R. T. (2001). Results-based comprehensive guidance and
counseling programs: A framework for planning and evaluation.
Professional School Counseling, 4, 289–299.
Loesch, L. C. (2001). Counseling program evaluation: Inside and outside
the box. In D. C. Locke, J. E. Myers, & E. L. Herr (Eds.), The hand-
book of counseling (pp. 513–525). Thousand Oaks, CA: Sage.
Lusky, M. B., & Hayes, R. L. (2001). Collaborative consultation and pro-
gram evaluation. Journal of Counseling & Development, 79, 26–38.
Madaus, G. F., & Kellaghan, T. (2000). Models, metaphors, and
definitions in evaluation. In D. L. Stufflebeam, G. F. Madaus, & T.
Kellaghan (Eds.), Evaluation models: Viewpoints on educational
and human services evaluation (2nd ed., pp. 19–31). Boston:
Kluwer Academic.
Myrick, R. D. (2003). Accountability: Counselors count. Professional
School Counseling, 6, 174–179.
No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115
Stat. 1425 (2002).
Otwell, P. S., & Mullis, F. (1997). Academic achievement and
counselor accountability. Elementary School Guidance and
Counseling, 31, 343–348.
Paisley, P. O., & Borders, L. D. (1995). School counseling: An
evolving specialty. Journal of Counseling & Development, 74,
150–153.
Powell, E. T., Steele, S., & Douglah, M. (1996). Planning a program
evaluation. Madison: Division of Cooperative Extension of the
University of Wisconsin-Extension.
Priest, S. (2001). A program evaluation primer. Journal of Experi-
ential Education, 24, 34–40.
Royse, D., Thyer, B. A., Padgett, D. K., & Logan, T. K. (2001).
Program evaluation: An introduction (3rd ed.). Belmont, CA:
Brooks/Cole.
Sanderson, W. C. (2003). Why empirically supported treatments are
important. Behavior Modification, 27, 290–299.
Scheid, T. L. (2003). Managed care and the rationalization of men-
tal health services. Journal of Health and Social Behavior, 44,
142–161.
Schmidt, J. J. (1995). Assessing school counseling programs through
external reviews. The School Counselor, 43, 114–123.
Sexton, T. L. (1996). The relevance of counseling outcome research:
Current trends and practical implications. Journal of Counseling
& Development, 74, 590–600.
Sexton, T. L. (1999). Evidence-based counseling: Implications
for counseling practice, preparation, and professionalism.
Greensboro, NC: ERIC Clearinghouse on Counseling & Stu-
dent Services. (ERIC Document Reproduction Service No.
ED 435 948)
Sexton, T. L., Whiston, S. C., Bleuer, J. C., & Walz, G. R.
(1997). Integrating outcome research into counseling prac-
tice and training. Alexandria, VA: American Counseling
Association.
Sink, C. A., & MacDonald, G. (1998). The status of comprehensive
guidance and counseling in the United States. Professional School
Counseling, 2, 88–94.
Studer, J. R., & Sommers, J. A. (2000). The professional school
counselor and accountability. National Association of Secondary
School Principals Bulletin, 84, 93–99.
Stufflebeam, D. L. (2000a). The CIPP model for evaluation.
In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.),
Evaluation models: Viewpoints on educational and human
services evaluation (2nd ed., pp. 279–317). Boston: Kluwer
Academic.
Stufflebeam, D. L. (2000b). Foundational models for 21st century
program evaluation. In D. L. Stufflebeam, G. F. Madaus, & T.
Kellaghan (Eds.), Evaluation models: Viewpoints on educational
and human services evaluation (2nd ed., pp. 33–96). Boston:
Kluwer Academic.
Stufflebeam, D. L., McCormick, C. H., Brinkerhoff, R. O., & Nelson,
C. O. (1985). Conducting educational needs assessment. Boston:
Kluwer Academic.
Trevisan, M. S. (2000). The status of program evaluation expectations
in state school counselor certification requirements. American
Journal of Evaluation, 21, 81–94.
Trevisan, M. S. (2001). Implementing comprehensive guidance pro-
gram evaluation support: Lessons learned. Professional School
Counseling, 4, 225–228.
Trevisan, M. S. (2002a). Enhancing practical evaluation training
through long-term evaluation projects. American Journal of
Evaluation, 23, 81–92.
Trevisan, M. S. (2002b). Evaluation capacity in K-12 school
counseling programs. American Journal of Evaluation, 23,
291–305.
Vacc, N. A., & Rhyne-Winkler, M. C. (1993). Evaluation and ac-
countability of counseling services: Possible implications for a
midsize school district. The School Counselor, 40, 260–266.
Wheeler, P. T., & Loesch, L. (1981). Program evaluation and counsel-
ing: Yesterday, today and tomorrow. The Personnel and Guidance
Journal, 51, 573–578.
Whiston, S. C. (1996). Accountability through action research:
Research methods for practitioners. Journal of Counseling &
Development, 74, 616–623.
Whiston, S. C. (2002). Response to the past, present, and future of
school counseling: Raising some issues. Professional School
Counseling, 5, 148–156.
Whiston, S. C., & Coker, J. K. (2000). Reconstructing clinical
training: Implications from research. Counselor Education and
Supervision, 39, 228–253.
Whiston, S. C., & Sexton, T. (1998). A review of school counseling
outcome research: Implications for practice. Journal of Counsel-
ing & Development, 76, 412–426.