Overview of Evidence Based Practices in Psychology and Implementation Science

Fig. 1.1
Evidence-based practice in psychology framework (APA 2006)

Development of an EBP

The evidence ladder, with the intervention science activity corresponding to each rung:

1. Reliable intervention: post-recognition quality monitoring
2. Disseminable: disseminability studies
3. Effective: multiple and multi-site replication studies
4. Conditionally effective: initial evaluation studies
5. Emerging: pilot studies (manuals, fidelity, and outcome measures)
6. Program of interest: discovering and describing interesting programs (basic research, clinical judgment)

Ladder of Evidence. Several intervention development/implementation models informed the development of the COMPASS model. The Ladder of Evidence Model (Leff et al. 2003) provides the largest context and is a good overview of our developmental process (see Fig. 1.2). As shown, the development and eventual dissemination of an EBP progresses through a series of six hierarchical steps. At the first step, the developers discover a promising new approach for some clinical disorder or problem. At this stage, case studies, clinical experience and program evaluation all help to provide the developers with the initial set of ingredients and critical elements that comprise the first iteration of the intervention.


Fig. 1.2
Ladder of evidence

This first step for COMPASS is described in more detail in Chap. 2. The next step comprises the pilot studies, where the initial iteration is first formally tested as a complete package. During this step, manuals and fidelity scales begin to be developed. The third step concerns the initial evaluation studies, usually with RCT designs, in which the intervention is first shown to be effective in a rigorous clinical trial. At this point, the intervention is considered to be an emerging or promising practice. The COMPASS studies comprising the second and third steps are described in more detail in Chaps. 4 and 5. The fourth step concerns further effectiveness studies that are larger and multi-site. We are beginning to conduct these studies for COMPASS. The fifth (dissemination) and sixth (reliable intervention) stages comprise what is often referred to as implementation science. Once an EBP has been identified, there is still a need to ensure that it is disseminated and implemented accurately. This requires the development of training protocols and a suite of fidelity and outcome measures to guide and track faithful implementation. We are also vigorously pursuing these aims, and discuss our progress in this regard in Chaps. 5 and 6. Overall, the Ladder of Evidence Model provides a good overview of our process. However, as discussed in the next section, we also supplemented this model with additional frameworks.

Dunst and Trivette Framework. Two further frameworks helped to guide our research program. Both build on the Ladder of Evidence and provide further explication of particular steps. The first, by Dunst and Trivette (2012), expands on steps five and six of the Ladder of Evidence. In this framework, they make a helpful differentiation between implementation strategies and the intervention strategy. As originally envisioned by Dunst and Trivette, implementation strategies represent those practices used to support the accurate implementation of the intervention (e.g., training, fidelity monitoring, outcomes monitoring). That is, implementation strategies do not intervene directly with the intended clients or students but instead support the intervention's implementation, and thus any impact on client or student outcomes is indirect. This is a very helpful framework for understanding a consultation model such as COMPASS. In this framework, the implementation practice refers to the methods used by consultants, coaches, and trainers to teach the intervention practice or EBP to the teacher, clinician, parent, or service provider, which will result in improved child or client outcomes. That is, the implementation practice is what the consultant does with the teacher, and the intervention practice is what the teacher does with the child. In our work, COMPASS has served as the evidence-based implementation strategy shown to result in better educational outcomes for children with ASD. The link between COMPASS (what the consultant does with the teacher) and child outcomes is the intervention practice or EBP (what the teacher does with the child). Each of the three areas in Fig. 1.3 represents interdependent activities that are distinct yet linked to one another.
In other words, the quality of the implementation practice (COMPASS consultant fidelity) should be associated with the quality of the intervention practice (teacher fidelity), which in turn is associated with the effectiveness of the practice outcomes (child educational goal attainment). In later chapters, we will present data that show the relationships among these three areas.


Fig. 1.3
Implementation science framework (Dunst et al. 2013)

Integrated Model. Our integrated model includes both the features of EBPP and the Dunst and Trivette (2012) framework, while also aligning with steps two through six of the Ladder of Evidence (see Fig. 1.4). The EBPP factors are represented by the internal and external factors described under consultant, teacher, and child behavior and discussed in Chaps. 7 and 8. The Dunst and Trivette framework is represented by the dashed lines and includes the quality elements associated with the implementation and intervention practice variables. As shown, there are three primary players (represented by the three central blocks) that impact COMPASS outcomes: the consultant, the teacher, and the student with ASD. The outputs of each central block are the specific behaviors of the consultant (e.g., providing feedback/education, providing support), the teacher (e.g., engaging the child directly, providing prompts), and the student with ASD (e.g., engagement with the teacher, compliance with directions, off-task behavior).


Fig. 1.4
Integrated model

Factors that can impact the outputs or behaviors of each actor are modeled as internal and external factors. These factors serve either to support or to hinder the individual in performing their specific tasks within COMPASS. Moreover, external and internal factors can refer either to general factors or to those specific to COMPASS.

For example, for the consultant, external factors include training in consultation practices generally, training in COMPASS specifically, and support from other consultants or administration. Internal factors could include general skills and knowledge (listening skills, observational or assessment skills, knowledge of autism), skills specific to COMPASS (ability to create good goals, knowledge of the COMPASS model), and personal factors (sense of well-being, burnout, personality, e.g., outgoing vs. introverted).

Similarly, for teachers, external factors include training (both general training in special education and training specific to COMPASS) and support (support from other teachers, general support from family and friends, administrative support, consultant support, and workplace supports such as time and equipment). Internal factors could include skills and knowledge, again both general and specific to COMPASS (knowledge of autism, skills in data collection, knowledge of the COMPASS model), as well as personal factors (burnout, stress, optimism).

For students, external factors include supports (teacher, parents, other students or professionals) and training (teacher instruction and feedback), and internal factors include knowledge and skills (good attentional ability, educational attainment, language skills) and personal factors (autism severity, intellectual disability). It should be noted that the initial COMPASS consultation provides a thorough assessment of the internal and external factors impacting the student.
