Methodological Issues Article Review
Read the following articles, which can be accessed through the ProQuest database in the Ashford University Library:
- Evidence-based practice in psychology: Implications for research and research training.
- Practice-based evidence: Back to the future.
- Psychological treatments: Putting evidence into practice and practice into evidence.
Write a three- to four-page article review in which you discuss methodological issues unique to psychological research and analyze basic applied psychological research relevant to the treatment of mental disorders. In your paper, you will discuss the topics of evidence-based practice and practice-based evidence and their roles in providing practitioners useful information for making decisions about appropriate mental health treatments.
In the body of your paper:
- Discuss the methodological issues and challenges that are unique to psychological research investigating effective treatments for psychological disorders.
- Explain the concepts of evidence-based practice and practice-based evidence and identify controversies associated with these concepts.
- Select one treatment modality associated with a disorder in the DSM-5 and present at least one example of pertinent, applied psychological research investigating the efficacy of the treatment modality. Discuss the findings of the research. Locate at least one peer-reviewed article that contains a research study on a treatment modality to fulfill this requirement. You may not use any of the course materials.
- Take the point of view of Bauer (2007) to analyze the article(s) you selected in #3. Using this author’s arguments from his Evidence-Based Practice in Psychology: Implications for Research and Research Training article, what would be his evaluation of the article(s) you selected?
- Take the point of view of Brendtro, Mitchell, & Doncaster (2011) and analyze the article(s) you selected in #3. Using these authors’ arguments from their Practice-Based Evidence: Back to the Future article, what would be their evaluation of the article(s) you selected?
- Discuss ways in which an evidence-based practice model might provide practitioners useful information for making decisions about the degree to which the treatment modality you selected in #3 is an appropriate treatment for the disorder you specified.
- Conclude your paper with a discussion of your opinion of the utility of evidence-based practice and practice-based evidence for practitioners needing to identify effective treatments for psychological disorders.
- Utilize a minimum of two additional peer-reviewed journal articles published within the last five years (not including the course text or any of the course materials). At least one article must be used to satisfy the requirement in #3, and at least one article must also be included to support your arguments. All sources must be documented in APA style, as outlined by the Ashford Writing Center.
Writing the Methodological Issues Article Review
The Assignment:
- Must be three to five double-spaced pages in length, and formatted according to APA style as outlined in the Ashford Writing Center.
- Must include a title page with the following:
Title of paper
Student’s name
Course name and number
Instructor’s name
Date submitted
- Must begin with an introductory paragraph that has a succinct thesis statement. (Refer to the Ashford Writing Center Thesis Generator.)
- Must address the topic of the paper with critical thought.
- Must end with a conclusion that summarizes your opinion of the utility of evidence-based practice and practice-based evidence for practitioners needing to identify effective treatments for psychological disorders.
- Must utilize each of the three required articles: Bauer (2007); Brendtro, Mitchell, and Doncaster (2011); and Dozois (2013).
- Must utilize a minimum of two additional peer-reviewed sources published within the last five years (not including the course text).
- Must document all sources in APA style, as outlined in the Ashford Writing Center.
- Must include a separate reference page, formatted according to APA style as outlined in the Ashford Writing Center.
Evidence-Based Practice in Psychology:
Implications for Research and Research Training
Russell M. Bauer
University of Florida
In this article, the author discusses the implications of evidence-based practice (EBP) for research and research training in clinical psychology. It is argued that EBP provides a useful framework for addressing some heretofore ignored problems in clinical research. Advancing evidence-based psychological practice will require educators to inject significant new content into research, design, and methodology courses and to further integrate research and practicum training. The author believes this to be an exciting opportunity for the field, not only because it will further psychologists' integration into the interdisciplinary health care and research environment, but also because it will provide new tools to educate students for capable, not just competent, professional activity. © 2007 Wiley Periodicals, Inc. J Clin Psychol 63: 685–694, 2007.
Keywords: education and training; research
In recent years, the notion that psychologists deliver "health care" rather than just "mental health care" has taken hold in our field. Along with this identification as a health care discipline comes a set of responsibilities to provide patients with clinical services that have been shown through research to be effective for addressing patient problems. The fundamental goal of the evidence-based practice (EBP) movement is to effect a cultural change within health care whereby practitioners will make "conscious, explicit, and judicious" use of current best evidence in clinical practice with individual patients (Mayer, 2004; Straus, Richardson, Glasziou, & Haynes, 2005). The contemporary emphasis on EBP is quite strong within other health care disciplines, where it has permeated the culture of education, practice, and research, and where it is seen as furnishing at least a partial answer to a fundamental call for accountability and continuous quality improvement in the overall system of health care delivery in the United States (Institute of Medicine, 2001).
Most psychologists understand that EBP refers to a process by which best evidence is used intentionally in making decisions about patient care. Psychologists are most familiar with the construct of best evidence in the context of the empirically supported treatment movement, but some may mistakenly believe that EBP and empirically supported treatment (EST) are synonymous. As other articles in this series make clear, they are not; EBP is a much broader concept that refers to knowledge and action in the three essential elements of patient encounters: (a) the best evidence guiding a clinical decision (the best evidence domain), (b) the clinical expertise of the health care professional to diagnose and treat the patient's problems (the clinical expertise domain), and (c) the unique preferences, concerns, and expectations that the patient brings to the health care setting (the client domain). These three elements are often referred to as the three pillars of EBP. Even a brief consideration of the many variables and mechanisms involved in the three pillars will lead the clinical psychologist to an obvious conclusion: EBP not only provides a framework for conceptualizing clinical problems, but also suggests a research agenda whereby patterns of wellness and illness are investigated with an eye toward how best practices are potentially mediated by unique aspects of practitioner expertise. It also highlights how key patient characteristics influence treatment acceptability and help define the role the patient plays in the health care relationship.
This is not a new agenda, but is quite similar to the agenda set forth by Gordon Paul in 1969 in his now-famous ultimate clinical question, "What treatment, by whom, is most effective for this individual, with that specific problem, under which set of circumstances, and how does it come about?" (Paul, 1967, 1969, p. 44). In asking this question, Paul's goal was to draw attention to variables that needed to be described, measured, or controlled for firm evidence to accumulate across studies of psychotherapy. The agenda for evidence-based psychological practice is similar, though broader, encompassing assessment as well as treatment, psychological health care policy as well as clinical procedure, and populations as well as individuals. As such, expanding the scope of evidence-based psychological practice provides an opportunity for psychologists to build conceptual and methodological bridges with their colleagues in medicine, nursing, pharmacy, the health professions, and public health.
Although its status as a health care delivery process is typically emphasized, EBP, initially referred to as evidence-based medicine, evolved at McMaster University as a pedagogical strategy for teaching students and practitioners how to incorporate research results into the process of patient care (McCabe, 2006; Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996). As professional psychology begins to seriously consider the relevance of EBP for broad aspects of practice (Davidson & Spring, 2006), we will have to grapple with some obvious implications for (a) how we conduct practice-based research, and (b) how we educate and train our students, the next cadre of clinical researchers, to develop the knowledge, skills, and expertise to contribute to the evidence base. In this article, I discuss some of these implications with an eye toward viewing the EBP movement as an opportunity to begin to answer some of our most difficult research questions, and to begin to address some of our most vexing and persistent problems in education and training.
Practice-Based Research
From a research perspective, EBP provides a framework for investigating heretofore neglected aspects of "rubber-meets-the-road" practice. That is, confronting gaps in the evidence base from an EBP perspective draws attention to key client variables (e.g., preferences for one treatment over another, ability and willingness to adhere to treatment, credibility of treatment rationales, demographic and socioeconomic variables that enhance or impede health care access or that contribute to attitudes about treatment acceptability) and dimensions of clinical expertise (e.g., the ability to deliver the appropriate EST for the patient's problem, the ability to adapt treatments to unique clients, the ability to deliver assessments appropriate to decision making, the ability to communicate effectively with the patient) that deserve empirical study. Practitioners face these gaps because our dominant research paradigms tend to yield data about homogeneous majority groups receiving standard treatment in optimal settings.
Thus far, most of what constitutes evidence-based psychological practice is in the area of empirically supported treatment (Chambless, 1995; Chambless et al., 1998). Currently, there are several psychological therapies with well-established efficacy for treatment of a variety of psychological problems (American Psychological Association [APA] Division 12 Dissemination Subcommittee of the Committee on Science and Practice). Work continues apace both on expanding this list to include new therapies and clinical problems and on demonstrating the portability of well-controlled efficacy studies to real-world problems (effectiveness; Chambless & Ollendick, 2001).
A parallel expansion of the evidence base for psychological assessment procedures is needed. More research is needed regarding the diagnostic utility of assessment tools in predicting at-risk status, in helping select which treatment is indicated, or in predicting treatment response. Even in areas where the evidence for the clinical utility of assessment procedures is strong (e.g., in surgical epilepsy, where the results of presurgical evaluation of verbal memory strongly predict which patients will develop postsurgical neuropsychological morbidity; Chelune, 1995), the best available evidence has not yet caused the majority of clinicians to modify their assessment approach accordingly.
A full instantiation of EBP in psychology will require an expansion of systematic research efforts that will provide us with more information about the clinical expertise and patient domains. This represents a real opportunity to broaden the scope of EBP in psychology. How do psychological practitioners with varying levels of expertise decide which of a number of alternative treatments to utilize in the clinic? What factors make clinically efficacious treatments acceptable to patients? How does cultural diversity interact with treatment acceptability? To apply best evidence to individual clinical problems seamlessly, we need to develop a research agenda that allows us to retrieve and analyze answers to these kinds of questions. This is a daunting task, and one that seems intractable from the point of view of our exclusive reliance on quantitative research methods and controlled experiments. Perhaps this is an area in which increased knowledge of qualitative research methods (see below) would be beneficial for the field. This is an area to which practicing scientist–practitioners can provide critical information by adopting a data-driven approach to practice that incorporates measurement and reporting of assessment and treatment outcomes for purposes of further addressing effectiveness questions.
Implications for Education and Training
In a recent survey on training in ESTs, Woody, Weisz, and McLean (2005) reported that, although many doctoral training programs provided didactic dissemination of EST-related information, actual supervised training in ESTs had declined compared to a similar survey conducted in 1993. The overall conclusion was that the field had a long way to go in ensuring that our students have sufficient skill and experience to practice ESTs in their professional lives. The authors cited several obstacles to training in ESTs, including (a) uncertainty about what it means to train students in EBP; (b) insufficient time to provide specific training in multiple ESTs given other training priorities, including research; (c) within-program shortages of trained supervisors needed to provide a truly broad EST training experience; and (d) philosophic opposition to what some perceive as an overly rigid, manualized approach to treatment that reduces professional psychological practice to technician status. It seems obvious to me that most of these barriers imply a method of training in which competency in ESTs is built one treatment at a time, thus requiring large investments of time and faculty effort. Although it is true that students need practical training in a variety of clinical methods, one key issue is whether the goal of graduate education is to train students to competency in a critical number of ESTs, or whether the goal is to train them in broader principles of evidence-based practice that will enable them to easily adapt to novel demands for new competencies after attaining their PhD (educating for capability rather than competency; Fraser & Greenhalgh, 2001).
There is evidence that clinical psychology training directors are ready for this development. In the Woody et al. (2005) survey, some clinical training directors indicated that current practice reflects an underemphasis on broad principles of evidence-based practice in favor of learning particular procedures on a treatment-by-treatment basis. Some of the issues related to the ability of programs to provide appropriate training would be addressed if we adopted a more general-principles approach. Although not particularly on point in the context of this article, it is my view that developing competencies in ESTs for research and professional practice is the joint and cumulative responsibility of doctoral programs, internships, and postdoctoral programs that work together to provide a continuum of training in the knowledge and skills of evidence-based psychological practice (EBPP).
Training in EBPP will require graduate training programs to include new content in research training curricula so that students are ready to understand and apply basic principles of EBPP in their everyday professional lives. Primary needs include training in (a) epidemiology, (b) clinical trials methodology, (c) qualitative research methods and measurement, (d) how to conduct and appraise systematic reviews and meta-analyses, and (e) the informatics and electronic database-searching skills necessary to find the best available evidence relevant to the problems that students will encounter in their research and clinical work. Such content could be introduced in a basic research methods course, could be taught separately in a course on EBPP, or could be infused in the curriculum through a combination of didactic, practicum, and research experiences (for additional ideas on infusion of EBPP into the curriculum, see DiLillo & McChargue, this issue). Achieving true infusion and integration will require that all program faculty are committed to the concept of EBPP, that all have received some basic education in EBPP themselves, and that EBPP concepts are represented throughout the curriculum. The faculty development implications of advancing EBPP are not trivial. In the short run, an effective strategy may be to partner with colleagues in medicine, the health professions, nursing, and public health to provide interdisciplinary instruction and mentoring in basic principles of EBP.
Epidemiology
Many problems important to psychologists (e.g., whether a clinical assessment tool is effective in identifying at-risk patients, whether a treatment protocol is effective in reducing psychological distress or disability in a defined population) can be conceptualized and described in epidemiological terms. For example, the strength of a treatment effect can be described with reference to the concept of "number needed to treat" (the number of patients who would need to be treated to produce one additional favorable outcome) or "number needed to harm" (the number of patients who would need to be treated for one additional patient to experience an unfavorable outcome), or, more generally, in terms of relative or absolute risk reduction. Knowledge of basic aspects of diagnostic test performance (e.g., sensitivity, specificity, positive and negative predictive value), so critical to psychological practice, can also be enhanced by forging links between these concepts and corresponding concepts in epidemiology (e.g., positive and negative likelihood ratios). A broad grounding in epidemiological methods will promote further ways of understanding and inferring causality from observational and experimental data, will further an appreciation for preventative methods, and will provide much-needed appreciation for community- and population-based methods that will complement psychology's traditional emphasis on individuals and small groups.
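These quantities are simple to compute once a trial's event rates or a test's 2×2 counts are in hand. The sketch below is a minimal illustration, not material from Bauer (2007); every number in it is invented.

```python
# Minimal sketch of the epidemiological quantities named above.
# All figures are hypothetical, chosen only to make the arithmetic visible.

def risk_measures(events_tx, n_tx, events_ctrl, n_ctrl):
    """Absolute risk reduction, relative risk reduction, number needed to treat."""
    risk_tx = events_tx / n_tx        # event rate in the treated group
    risk_ctrl = events_ctrl / n_ctrl  # event rate in the control group
    arr = risk_ctrl - risk_tx         # absolute risk reduction
    rrr = arr / risk_ctrl             # relative risk reduction
    nnt = 1 / arr                     # patients treated per one extra good outcome
    return arr, rrr, nnt

def test_metrics(tp, fp, fn, tn):
    """Diagnostic test performance and the corresponding likelihood ratios."""
    sens = tp / (tp + fn)             # sensitivity
    spec = tn / (tn + fp)             # specificity
    ppv = tp / (tp + fp)              # positive predictive value
    npv = tn / (tn + fn)              # negative predictive value
    lr_pos = sens / (1 - spec)        # positive likelihood ratio
    lr_neg = (1 - sens) / spec        # negative likelihood ratio
    return sens, spec, ppv, npv, lr_pos, lr_neg

# Hypothetical trial: 20/100 relapse with treatment vs. 35/100 with control.
arr, rrr, nnt = risk_measures(20, 100, 35, 100)
print(f"ARR = {arr:.2f}, RRR = {rrr:.2f}, NNT = {nnt:.1f}")  # ARR = 0.15, RRR = 0.43, NNT = 6.7

# Hypothetical screening tool: 40 true positives, 10 false positives,
# 5 false negatives, 45 true negatives.
print(test_metrics(40, 10, 5, 45))
```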
Clinical Trials Methodology
Although many graduate statistics and methodology courses cover such topics as case-control designs, cohort designs, and elements of randomized clinical trials (RCTs), classical methodology education in the Campbell and Stanley (1963) tradition needs to be supplemented with contemporary information relevant to clinical trials methodology. For example, training in standards for designing, conducting, and reporting clinical trials consistent with the CONSORT statement (Begg et al., 1996; Moher, Schulz, & Altman, 2001) is important so that reports of psychological clinical trials have appropriate consistency and transparency. Training in methods for reporting the size of treatment effects (going beyond statistical significance), allocating samples, specifying outcomes (relative and absolute risk reduction, number needed to treat and number needed to harm), and addressing the ethical issues of clinical trials is critically needed if psychology is to develop a truly evidence-based practice. Building the ability to critically evaluate the results of extant trials is also crucial if psychological practitioners are to meaningfully apply the best evidence standard to their own clinical work and research.
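As one concrete instance of reporting effect size rather than statistical significance alone, a standardized mean difference can accompany the test statistic. This is an illustrative sketch; the means, standard deviations, and sample sizes are hypothetical and not taken from any trial cited here.

```python
# Cohen's d with a pooled standard deviation; hypothetical trial-arm summaries.
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference between two independent groups."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Symptom scores after treatment (lower is better) vs. waitlist control.
d = cohens_d(mean1=14.2, sd1=5.1, n1=60, mean2=18.9, sd2=5.4, n2=58)
print(f"Cohen's d = {d:.2f}")  # about -0.90: treated group scored roughly 0.9 SD lower
```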
Qualitative Research Methods and Measurement
Clinical psychologists trained in the scientist–practitioner tradition are almost exclusively focused on quantitative research methods, with an attendant emphasis on measurement precision, quantitative statistical analysis, and tightly controlled experimental design. This scientific tradition links us with our colleagues in the natural and social sciences, and represents our preferred "way of knowing" the world. In contrast, qualitative approaches to research seek to evaluate the quality, or essence, of human experience using a fundamentally different methodological and analytic framework (Mays & Pope, 1995, 2000; Pope, Ziebland, & Mays, 2000). Many psychologists are familiar with at least some qualitative research methods, exemplified, for example, in ethnography, sociometry, participant observation, or content analysis of discourse. However, methods such as convergent interviewing, focus groups, and personal histories are generally foreign to most students in scientist–practitioner programs. As applied to health care, qualitative researchers may seek to evaluate the experiences of brain-injured patients in rehabilitative settings as a way of enhancing the design of the rehabilitation environment for purposes of maximizing recovery. They may investigate case dispositions in a child neurosurgery clinic by evaluating commonalities among physicians' notes and clinical decisions. They may evaluate treatment acceptability by interviewing patients about their experiences in treatment. It is important for psychologists to become more familiar with these methods because many systematic reviews in the EBP literature contain the results of qualitative studies (Thomas et al., 2004). Although qualitative studies are generally incapable of establishing causative relationships among variables, they may be the only (and therefore the best) source of evidence for rare conditions, and they may suggest associations worthy of future research. Reviews of this area as applied to health care can be found in Greenhalgh and Taylor (1997), Grypdonck (2006), Holloway (1997), and Leininger (1994).
Conducting Systematic Reviews and Meta-Analyses
The explosion of relevant medical and psychological literature has made it difficult for scientist–practitioners to have access to the best evidence at the single-study level while attending to multiple simultaneous demands on their time. For this reason, systematic reviews of the literature are becoming increasingly important as sources of state-of-the-art information. Most graduate courses in research methodology and statistics devote little attention to conducting reviews or meta-analyses, although many programs now appear to be offering grant-writing courses or seminars. In these courses, an emphasis on design and critique of individual studies is commonplace, whereas development of skills in evaluating systematic reviews or meta-analyses is rare. If psychology is to become a key player in evidence-based practice, the next cadre of scientist–practitioners will have to develop skills in conducting and evaluating these kinds of reviews. In programming needed education and training, it is important to distinguish between narrative reviews (the kind of review that is seen, for example, in Psychological Bulletin) and systematic reviews. Narrative reviews are conducted by knowledgeable persons who often conduct the review to advance a particular theoretical conclusion. They therefore yield potentially biased conclusions because there is no consensually agreed-upon method for combining and weighting results from different studies. In contrast, systematic reviews and meta-analyses proceed according to specified methodological conventions in which the search method, the procedure for including and excluding studies, and the method for eventually calculating effect sizes or odds ratios are specified beforehand (e.g., fixed-effects vs. random-effects models), as are methods for determining statistical and clinical significance (Cook, Mulrow, & Haynes, 1997; Cook, Sackett, & Spitzer, 1995; Quintana & Minami, 2006). Meta-analysis is a specific form of quantitative systematic review that aggregates the results of similar studies for purposes of generating more stable conclusions from pooled data than is possible at the individual-study level (Egger, Smith, & Phillips, 1997; Rosenthal & DiMatteo, 2001; Wolf, 1986). Recent techniques allow for the calculation of bias in published studies, allowing the reader to appraise whether the results of the analysis reflect an undistorted view of effect size (Sterne, Egger, & Smith, 2001). Clinical psychologists need to know these basic concepts so that they can evaluate the relevance and quality of available evidence.
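To make the fixed-effects convention concrete, the sketch below pools three invented study results with inverse-variance weights. It is a minimal illustration under stated assumptions, not the procedure of any particular review; a random-effects model would add a between-study variance term to each weight.

```python
# Fixed-effect, inverse-variance pooling of standardized mean differences.
import math

def fixed_effect_pool(effects, standard_errors):
    """Weighted mean effect (weights = 1/SE^2) and its standard error."""
    weights = [1 / se**2 for se in standard_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies of the same treatment.
effects = [0.45, 0.30, 0.60]
standard_errors = [0.15, 0.20, 0.25]
pooled, se = fixed_effect_pool(effects, standard_errors)
low, high = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled d = {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")
```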
Informatics and Database Searching Skills
If a tree falls in the woods and there is no one there to hear it, does it make a sound? This classical conundrum about the nature of reality seems relevant to the key issue of information access in evidence-based practice. If useful information about best evidence exists, but we do not or cannot access it, it cannot be brought to bear on clinical decision making (Slawson & Shaughnessy, 2005). For this reason, developing expertise in informatics and database searching is a critical step in making EBPP a reality. In my experience, most psychologists, and students of psychology, search a limited number of databases (PubMed, U.S. National Library of Medicine, 1971; PsycLIT, APA, 1967) with a single search term and (at most) a single Boolean operator. It is not uncommon for the supervisor of a fledgling student to hear that "there's nothing in the literature" about a topic the student is interested in researching. Most use very little of what is available, and many are completely unaware of many of the most important and useful resources available for EBPP. A detailed discussion of these resources is beyond my scope (see Hunt & McKibbon, 1997); nevertheless, it seems critical that some effort be devoted (either in faculty development seminars or in graduate education) to addressing database availability explicitly, including access strategies, search methodology, and approaches to information management (managing search results). A key first step in getting this accomplished may be to establish a close relationship with a librarian or library informatics specialist who can help translate educational and research needs into strategies for accessing needed information, and who can provide access to needed databases and other resources. It is not uncommon, particularly in larger institutions, for at least one member of the library staff to be particularly skilled in evidence-based medicine. There are a number of databases that are of particular relevance to EBPP, including CINAHL (nursing and allied health; Cinahl Information Systems, 1984), EMBASE (1974), the Cochrane Library (including the Cochrane Database of Systematic Reviews [CDSR]; Cochrane Library, 1999a), the Database of Abstracts of Reviews of Effects (DARE; Cochrane Library, 1999b), the Cochrane Central Register of Controlled Trials (CENTRAL; Cochrane Library, 1999c), and the ACP Journal Club (American College of Physicians, 1994), available on the Ovid (Ovid Technologies, New York, NY) search engine (for a more in-depth discussion, see Walker & London, this issue).
Obtaining access to these databases is only part of the story; the development of strategic searching skills designed to yield a manageable number of relevant search results is a key outcome goal of educational efforts, and one that will be achieved only through actual practice in problem-based learning situations. Finally, development of a local or profession-wide resource that contains the answers to evidence-based queries (so-called critically appraised topics, or CATs) will enable students and their mentors to benefit from the evidence-based practice efforts of their colleagues. Other authors in this series have suggested ways of incorporating skill-building activities into practicum and other parts of the psychology curriculum (see Collins, Belar, & Leffingwell, this issue; DiLillo & McChargue, this issue).
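As a small illustration of strategic searching, the sketch below assembles a multi-concept Boolean query rather than the single-term search described above. The field tags ([mh], [tiab], [pt]) follow PubMed conventions; the clinical topic and the specific terms are hypothetical and would benefit from a librarian's refinement.

```python
# PICO-style query: synonyms are ORed within a concept block, and the
# blocks are ANDed together to narrow the result set.
population = '("panic disorder"[mh] OR "panic disorder"[tiab])'
intervention = '("cognitive therapy"[mh] OR "cognitive behavioral therapy"[tiab])'
design = '("randomized controlled trial"[pt] OR "meta-analysis"[pt] OR "systematic review"[tiab])'

query = f"{population} AND {intervention} AND {design}"
print(query)  # paste into PubMed; contrast the yield with a bare keyword like "panic"
```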
The Way Forward
In this article, I have tried to highlight ways that the interdisciplinary trend toward evidence-based practice offers real opportunities to address some difficult research problems and to revitalize certain aspects of our graduate curricula. This brief analysis has likely raised more questions (e.g., How? When? By whom? In what way?) as far as the training implications are concerned, and has not dealt at all with criticisms that have been thoughtfully levied against the EBP approach to research and research training. One key issue in advancing EBP within psychology will be to pay attention to the key stage of the process by which knowledge (best evidence) is transformed into action and application. This, in my view, is the stage of the process that is least understood from a psychological viewpoint. What are the principles by which best evidence can be modified to fit the individual case? What evidence is "good enough" to drive a clinical decision? What about those aspects of psychological health care (e.g., relationship, trust, identification, and modeling) that are implicitly important in the delivery of services, but that don't themselves have large-scale independent empirical support? These (and others) are key questions we will need to grapple with as we implement an evidence base for clinical psychology and teach students how to access and use it.
With regard to pedagogy, I am convinced that the only way to go is to incorporate problem-based, real-time experiences throughout the curriculum in which students can learn to walk the EBPP walk. This is a significant undertaking with profound implications as far as faculty development is concerned. I am as skeptical of an Evidence-Based Practice course as a way to develop the needed skills and capacities of our students as I am that a Cultural Diversity course will somehow build multicultural competencies. We will need to figure out how to incorporate the content, the concepts, and the techniques of evidence-based psychological practice at all levels of research and clinical training if we are to be truly successful in assimilating the EBPP way of thinking. We cannot do it all; faculty are generally not up to speed with all that is needed, and, for the practicing clinician, health care events proceed at a rapid pace. We can begin the process by equipping tomorrow's psychological practitioners with the tools necessary to implement EBPP in their everyday clinical practice. In addition, we can capitalize on the obvious opportunities to expand our multidisciplinary interdependence with other health professionals in nursing, medicine, pharmacy, and public health who are further down the EBP road than we are. Providing faculty with needed support, and developing methods for educating and training tomorrow's psychologists in EBPP, is critically needed if we are to establish an evidence base equal to the task of providing quality psychological health care for those who depend on us.
References
American College of Physicians. (1994). ACP Journal Club homepage. Retrieved February 15, 2007, from http://www.acpjc.org
American Psychological Association. (1967). PsycINFO homepage. Retrieved February 15, 2007, from http://www.apa.org/psycinfo/products/psycinfo.html
Begg, C., Cho, M., Eastwood, S., Horton, R., Moher, D., Olkin, I., et al. (1996). Improving the quality of reporting of randomized controlled trials: The CONSORT statement. Journal of the American Medical Association, 276, 637–639.
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally College Publishing.
Chambless, D. L. (1995). Training and dissemination of empirically validated psychological treatments: Report and recommendations. The Clinical Psychologist, 48, 3–23.
Chambless, D. L., Baker, M. J., Baucom, D. H., et al. (1998). Update on empirically validated therapies, II. The Clinical Psychologist, 51, 3–16.
Chambless, D. L., & Ollendick, T. H. (2001). Empirically supported psychological interventions: Controversies and evidence. Annual Review of Psychology, 52, 685–716.
Chelune, G. (1995). Hippocampal adequacy versus functional reserve: Predicting memory functions following temporal lobectomy. Archives of Clinical Neuropsychology, 10, 413–432.
Cinahl Information Systems. (1984). Homepage. Retrieved February 15, 2007, from http://www.cinahl.com
Cochrane Library. (1999a). Homepage. Retrieved February 15, 2007, from http://www3.interscience.wiley.com/cgi-bin/mrwhome/106568753/HOME
Cochrane Library. (1999b). DARE homepage. Retrieved February 15, 2007, from http://www.mrw.interscience.wiley.com/cochrane/cochrane_cldare_articles_fs.html
Cochrane Library. (1999c). Cochrane Central Register of Controlled Trials homepage. Retrieved February 15, 2007, from http://www.mrw.interscience.wiley.com/cochrane/cochrane_clcentral_articles_fs.html
Collins, F. L., Leffingwell, T. R., & Belar, C. D. (2007). Teaching evidence-based practice: Implications for psychology. Journal of Clinical Psychology, 63, 657–670.
Cook, D. J., Mulrow, C. D., & Haynes, R. B. (1997). Systematic reviews: Synthesis of best evidence for clinical decisions. Annals of Internal Medicine, 126, 376–380.
Cook, D. J., Sackett, D. L., & Spitzer, W. O. (1995). Methodologic guidelines for systematic reviews of randomized control trials in health care from the Potsdam Consultation on Meta-Analysis. Journal of Clinical Epidemiology, 48, 167–171.
Davidson, K. W., & Spring, B. (2006). Developing an evidence base in clinical psychology. Journal of Clinical Psychology, 62, 259–271.
DiLillo, D., & McChargue, D. (2007). Implementing evidence-based practice training in a scientist–practitioner program. Journal of Clinical Psychology, 63, 671–684.
Egger, M., Smith, G. D., & Phillips, A. N. (1997). Meta-analysis: Principles and procedures. British Medical Journal, 315, 1533–1537.
EMBASE. (1974). Homepage. Retrieved February 15, 2007, from http://www.embase.com
Fraser, S. W., & Greenhalgh, T. (2001). Coping with complexity: Educating for capability. British Medical Journal, 323, 799–803.
Greenhalgh, T., & Taylor, R. (1997). How to read a paper: Papers that go beyond numbers (qualitative research). British Medical Journal, 315, 740–743.
Grypdonck, M. H. (2006). Qualitative health research in the era of evidence-based practice. Qualitative Health Research, 16, 1371–1385.
Holloway, I. (1997). Basic concepts for qualitative research. Oxford: Blackwell Science.
Hunt, D. L., & McKibbon, K. A. (1997). Locating and appraising systematic reviews. Annals of Internal Medicine, 126, 532–538.
Institute of Medicine. (2001). Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academies Press.
Leininger, M. (1994). Evaluation criteria and critique of qualitative research studies. In J. M. Morse (Ed.), Critical issues in qualitative research methods. Thousand Oaks, CA: Sage.
Mayer, D. (2004). Essential evidence-based medicine. New York: Cambridge University Press.
Mays, N., & Pope, C. (1995). Reaching the parts other methods cannot reach: An introduction to qualitative methods in health and health services research. British Medical Journal, 311, 42–45.
Mays, N., & Pope, C. (2000). Assessing quality in qualitative research. British Medical Journal, 320, 50–52.
McCabe, O. L. (2006). Evidence-based practice in mental health: Accessing, appraising, and adopting research data. International Journal of Mental Health, 35, 50–69.
Moher, D., Schulz, K. F., & Altman, D. G. (2001). The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomized trials. Lancet, 357, 1191–1194.
Paul, G. L. (1967). Outcome research in psychotherapy. Journal of Consulting Psychology, 31, 109–118.
Paul, G. L. (1969). Behavior modification research: Design and tactics. In C. M. Franks (Ed.), Behavior therapy: Appraisal and status (pp. 29–62). New York: McGraw-Hill.
Pope, C., Ziebland, S., & Mays, N. (2000). Qualitative research in health care: Analyzing qualitative data. British Medical Journal, 320, 114–116.
Quintana, S. M., & Minami, T. (2006). Guidelines for meta-analyses of counseling psychology research. The Counseling Psychologist, 34, 839–877.
Rosenthal, R., & DiMatteo, M. R. (2001). Meta-analysis: Recent developments in quantitative methods for literature reviews. Annual Review of Psychology, 52, 59–82.
Sackett, D. L., Rosenberg, W. M. C., Gray, J. A. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn't. British Medical Journal, 312, 71–72.
Slawson, D. C., & Shaughnessy, A. F. (2005). Teaching evidence based medicine: Should we be teaching information management instead? Academic Medicine, 80, 685–689.
Sterne, J. A. C., Egger, M., & Smith, G. D. (2001). Investigating and dealing with publication and other biases in meta-analysis. British Medical Journal, 323, 101–105.
Straus, S. E., Richardson, W. S., Glasziou, P., & Haynes, R. B. (2005). Evidence based medicine: How to practice and teach EBM. Edinburgh: Elsevier/Churchill Livingstone.
Thomas, J., Harden, A., Oakley, A., Oliver, S., Sutcliffe, K., Rees, R., et al. (2004). Integrating qualitative research with trials in systematic reviews. British Medical Journal, 328, 1010–1012.
U.S. National Library of Medicine. (1971). Medline/PubMed homepage. Retrieved February 15, 2007, from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi
Walker, B. W., & London, S. (2007). Novel tools and resources for evidence-based practice in psychology. Journal of Clinical Psychology, 63, 633–642.
Wolf, F. M. (1986). Meta-analysis: Quantitative methods for research synthesis. Beverly Hills, CA: Sage.
Woody, S. R., Weisz, J., & McLean, C. (2005). Empirically supported treatments: 10 years later. The Clinical Psychologist, 58, 5–11.
Practice-Based Evidence: Back to the Future
Larry K. Brendtro, Martin L. Mitchell, & James Doncaster
Researchers are shifting from the medical model of studying treatments to a practice-based model focusing on the nature and needs of a person in a therapeutic relationship. As seen from the articles in this special issue, this has been a central tenet of Re-ED since it was founded by Nicholas Hobbs fifty years ago.
James Doncaster, guest editor
from the editors
Confusion abounds about what qualifies as "evidence" of effective interventions. The president of the American Psychological Association (APA) notes that "much of the research that guides evidence-based practice is too inaccessible, overwhelming, and removed from practice" (Goodheart, 2010, p. 9). Yet lists of evidence-based treatments are being used to control funding in treatment, human services, and education. Stated simply, such policies are based on shaky science. Certainly there is no shortage of evidence that some methods are destructive, like withholding treatment or placing traumatized kids in toxic environments. But a wide variety of therapeutic interventions can have a positive impact if conducted within a trusting alliance.
There are two very different views of what evidence is most important. Research in the traditional medical model compares a proposed treatment with alternates or a placebo. If a prescribed number of published studies give a statistical edge, the treatment is anointed as "evidence-based." This is followed by endorsements from the National Institutes of Health, the Department of Education, or other authoritative bodies.
Providing lists of curative treatments may work for medicine, but this is not how to find what works in complex therapeutic relationships. Mental health research has shown that the process of enshrining specific treatment models as evidence-based is based on flawed science (Chan, Hróbjartsson, Haahr, Gøtzsche, & Altman, 2004). Dennis Gorman (2008) of Texas A&M University documents similar problems with school-based substance abuse and violence prevention research, which he calls scientific nonsense.
Julia Littell (2010) of the Campbell Collaboration documents dozens of ways that sloppy science is being used to elevate specific treatments to evidence-based status. Here are just a few of these research flaws:
- Allegiance Effect: Studies produced by advocates of a particular method are positively biased.
- File Cabinet Effect: Studies showing failure or no effects are tucked away and not submitted for publication.
- Pollyanna Publishing Effect: Professional journals are much more likely to publish studies that show positive effects and reject those that do not.
- Replication by Repetition Effect: Reviewers rely heavily on recycling findings cited by others, confusing rumor and repetition with replication.
- Silence the Messenger Effect: Those who raise questions about the scientific base of studies are met with hostility and ad hominem attacks.
When researchers account for such biases, a clear pattern emerges. Widely touted evidence-based treatments turn out to be no better or no worse than other approaches. Solid science speaks: success does not lie in the specific method but in common factors, the most important being the helping relationship.
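The file cabinet and Pollyanna publishing effects listed above can be made concrete with a toy simulation, sketched below with invented parameters (it illustrates the selection mechanism only and is not an analysis from Littell, 2010): when only statistically significant positive results reach print, the published literature overstates a small true effect.

```python
# Toy simulation of publication bias: publish only "significant" positive results.
import random
import statistics

random.seed(1)
TRUE_EFFECT, N, STUDIES = 0.10, 50, 2000  # small true effect, modest samples

all_estimates, published = [], []
for _ in range(STUDIES):
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    control = [random.gauss(0, 1) for _ in range(N)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = ((statistics.variance(treated) + statistics.variance(control)) / N) ** 0.5
    all_estimates.append(diff)
    if diff / se > 1.96:              # the file cabinet swallows everything else
        published.append(diff)

print(f"true effect:               {TRUE_EFFECT:.2f}")
print(f"mean across all studies:   {statistics.mean(all_estimates):.2f}")
print(f"mean of published studies: {statistics.mean(published):.2f}")  # inflated
```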
Our field is in ferment as the focus of research is shifting. Instead of the study of treatments, the child now takes center stage. The practice-based model focuses on the nature and needs of an individual in an ecology (Brendtro & Mitchell, 2010). Effective interventions use research and practice expertise to target client characteristics including problems, strengths, culture, and motivation (APA, 2006). Research and evaluation measure progress and provide feedback on the quality of the therapeutic alliance (Duncan, Miller, Wampold, & Hubble, 2010).
Re-ED is rooted in practice-based evidence. It taps a rich tradition of research, provides tools for direct work with youth, and tailors interventions to the individual child in an ecosystem (Cantrell & Cantrell, 2007; Freado, 2010). Fifty years after they were developed by Nicholas Hobbs and colleagues, the Re-ED principles offer a still-current map for meeting modern challenges. Re-ED does not impose a narrowly prescribed regimen of treatment, but uses human relationships to change the world one child at a time.
Larry K. Brendtro, PhD, is Dean of the Starr Institute for Training and co-editor of this journal with Martin L. Mitchell, EdD, President and CEO of Starr Commonwealth, Albion, Michigan. They can be contacted via email at courage@reclaiming.com
James Doncaster, MA, is the senior director of organizational development at Pressley Ridge in Pittsburgh, Pennsylvania, and is guest editor of this special issue on the fiftieth anniversary of the founding of Re-ED. He may be contacted at jdoncaster@pressleyridge.org
References
APA Presidential Task Force on Evidence-Based Practice. (2006). Evidence-based practice in psychology. American Psychologist, 61(4), 271–285.
Brendtro, L., & Mitchell, M. (2010). Weighing the evidence: From chaos to consilience. Reclaiming Children and Youth, 19(2), 3–9.
Cantrell, R., & Cantrell, M. (2007). Helping troubled children and youth. Memphis, TN: American Re-Education Association.
Chan, A., Hróbjartsson, A., Haahr, M., Gøtzsche, P., & Altman, D. (2004). Empirical evidence for selective reporting of outcomes in randomized trials: Comparison of protocols to published articles. JAMA, 291, 2457–2465.
Duncan, B., Miller, S., Wampold, B., & Hubble, M. (Eds.). (2010). The heart and soul of change, second edition: Delivering what works in therapy. Washington, DC: American Psychological Association.
Freado, M. (2010). Measuring the impact of Re-ED. Reclaiming Children and Youth, 19(2), 28–31.
Goodheart, C. (2010). The education you need to know. Monitor on Psychology, 41(7), 9.
Gorman, D. (2008). Science, pseudoscience, and the need for practical knowledge. Addiction, 103, 1752–1753.
Littell, J. (2010). Evidence-based practice: Evidence or orthodoxy. In B. Duncan, S. Miller, B. Wampold, & M. Hubble (Eds.), The heart and soul of change, second edition: Delivering what works in therapy. Washington, DC: American Psychological Association.
Principles of Re-ED
- Trust between a child and adult is essential, the foundation on which all other principles rest.
- Life is to be lived now, not in the past, and lived in the future only as a present challenge.
- Competence makes a difference, and children should be good at something, especially at school.
- Time is an ally, working on the side of growth in a period of development.
- Self-control can be taught, and children and adolescents can be helped to manage their behavior.
- Intelligence can be taught to cope with challenges of family, school, and community.
- Feelings should be nurtured, controlled when necessary, and explored with trusted others.
- The group is very important to young people, and it can be a major source of instruction in growing up.
- Ceremony and ritual give order, stability, and confidence to troubled children and adolescents.
- The body is the armature of the self, around which the psychological self is constructed.
- Communities are important so youth can participate and learn to serve.
- A child should know some joy in each day.
Hobbs, N. (1982). The troubled and troubling child. San Francisco, CA: Jossey-Bass.
Psychological Treatments: Putting Evidence Into Practice and Practice Into Evidence
Dozois, David J. A.
Canadian Psychology, 54(1), February 2013, p. 1 (via ProQuest Central).