COMPASS Intervention Quality and Active Ingredients

Fig. 6.1
Integrated model with focus on intervention practice quality

Recall from the previous chapter our description of two types of quality measures: structural and procedural. We examined both structural and procedural fidelity in both RCTs. Table 6.1 describes our measures of intervention quality, the criterion or purpose of each measure, the specific construct represented, and a description of the measure. With respect to structural fidelity, we first applied a simple measure and tested whether it was stable, could be measured reliably, and was sensitive to change associated with child educational outcomes. We also applied a second structural fidelity measure: IEP quality. Specifically, we examined how well teachers followed through and updated their students' IEPs following COMPASS. We developed an IEP quality measure, assessed change from pre- to post-COMPASS, and compared the results to the control groups. These two measures will be described in detail as we discuss mechanisms of change, that is, explanations for the success of COMPASS.

Table 6.1
Intervention fidelity measures

Type of fidelity | Construct | Description
Teacher adherence | | The percentage of elements implemented from the intervention plans
Program differentiation | Targeted IEP quality | Areas of IEP improvement as a function of COMPASS
Participant responsiveness | Teacher engagement | Quality of teacher engagement during instruction with the child
Participant responsiveness | Student engagement | Quality of student engagement during teacher instruction
Quality of delivery | Common elements of teaching sequence | Quality of implementation of the intervention plan

For procedural fidelity, we applied three measures that assessed the quality of the intervention as delivered by the teacher. Two tapped the quality of the teacher's instruction while working with the student, and a third tapped the quality of the student's engagement during interaction with the teacher. This last measure was developed post hoc, after our RCTs; we explain the rationale for it, using a common elements approach, later in the chapter. We will begin with a discussion of our measures of structural fidelity. As we will show, based on our findings, both measures index critical aspects of the COMPASS model, that is, they are mechanisms of action.

Mechanisms of Action

Mechanisms of action typically are thought to reflect the underlying theory of change, or active ingredients, that explain why a treatment works. Consider psychotherapy, for example: in psychoanalytic theory, a key mechanism of change is catharsis, the process of releasing emotions. Thus, one would expect that a person experiencing successful change would also undergo catharsis. Similarly, according to Beck's cognitive theory of depression (Beck 1995), the cognitive triad (negative views of self, world, and future) is thought to underlie depression. Thus, therapeutic change should follow from, and be proportional to, change in the degree to which an individual endorses the three elements of the cognitive triad.

When asked to generate reasons why COMPASS works, we had to think carefully about the goals of COMPASS and the outcomes we expected to observe following its different activities. For an intervention such as COMPASS, articulating mechanisms is complicated by the fact that COMPASS is an implementation strategy intended to alter an intervention practice through teacher behavior change, which in turn should impact child outcomes. Thus, we had to think about mechanisms at two different levels: one associated with the consultant and the other associated with the teacher.

In our paper "Mechanisms of Change in COMPASS Consultation for Students with Autism" (Ruble et al. 2013), we examined the underlying factors that help explain why COMPASS works. We wanted to know which implementation variables, intervention (teacher) variables, and child variables impacted outcomes. We were guided by the National Research Council (2001) recommendations for effective programs, the Individuals with Disabilities Education Act (IDEA 2004) mandates for research-supported educational intervention, and the frameworks described in Chap. 1. In the next section, we describe the critical elements identified in our two RCTs and the hypothesized elements that need to be evaluated more thoroughly.

Recall that the two steps of COMPASS were (a) the initial parent-teacher goal setting and treatment planning session and (b) the follow-up teacher coaching and performance-based assessment activities. At the level of implementation strategy, several elements seemed potentially important for creating change in the teacher and the child, such as how well the consultant implemented the consultation and coaching sessions (reviewed in Chap. 5). At the level of intervention practice, what the teacher did as a result of the consultation had to be considered. Here, we hypothesized that at least two teacher variables would be critical for positive child goal attainment outcomes: IEP quality, an expected outcome of the initial consultation, and implementation fidelity of teaching plans, an expected outcome of the coaching sessions. Figure 6.2, using the Dunst et al. (2013) framework, specifies the two active ingredients, as part of the intervention practice, that we tested. This framework is familiar because we presented it in Chap. 1, and it is embedded in our Integrated Model.


Fig. 6.2
Dunst et al. (2013) framework for understanding active ingredients

Intervention practice: IEP quality. COMPASS explicitly incorporates the concept of social validity. Social validity refers to the "accurate and representative sample of the consumers' opinions," which results in information that is "used to sustain satisfactory practices or effect changes in the program to enhance its viability" (Schwartz and Baer 1991). The first contact with the parent and teacher is critical for creating a shared understanding of the child from both viewpoints. We believe a clear understanding of the entire set of challenges and supports facing the child is critical for sound goal setting and strategy selection, and that teachers with this level of understanding will do a better job and have more confidence in their choices; this is one reason teachers follow through with the plans that result from COMPASS.

We also believe that the discussions of the COMPASS Profile (e.g., see Fig. 5.2), which summarizes and assesses the challenges and resources impacting the child and family, result in better, more appropriate IEP goals for that specific child. After all, we know that children with ASD need targeted instruction in social communication skills and self-direction, although the specific goals for these areas are not always readily apparent. Two children may share a need for social skills instruction; however, one may need instruction in having peer-appropriate conversations while the other may need instruction in sharing toys back and forth. For each child, the specific goals and teaching methods are unique. The information needed to identify each child's unique challenges comes from the discussion of the COMPASS Profile assessment forms, which helps create a picture of the whole child at home, at school, and in the community. This complete representation helps specify the individualized goals, as well as the personal and environmental challenges and supports necessary for understanding and developing the intervention plan.

Thus, one expected outcome of completing step one of COMPASS, the initial consultation, was a set of ecologically valid, personalized goals, and we expected these goals to be reflected in a better IEP. Specifically, we expected the IEPs to contain teaching goals that were well written, personalized to the child, and reflective of the needs of students with ASD, based on both the NRC (2001) best practices for educating students with autism and the IDEA (2004) federal mandates for special education programs. Both sets of guidelines are incorporated into COMPASS. The elements drawn from these two sources resulted in an IEP evaluation tool (Ruble et al. 2010) that we used to test our prediction of better quality IEPs. The IDEA elements incorporated into our tool concerned the quality of the IEP goals. Well-written goals have at least three features: (a) they are measurable; (b) they are observable; and (c) they describe a criterion or expected attainment level. The NRC elements of quality focused on the nature of the goals and their sensitivity to the needs of students with ASD. An NRC-informed IEP contains goals in the key areas identified as critical for ASD: social goals, communication goals, and goals that reflect skills necessary for independent or self-directed learning. Table 6.2 shows examples from the evaluation tool used for quality determination.

Table 6.2
NRC and IDEA quality indicators

NRC indicators

1. Includes goals/objectives for social skills to improve involvement in school and family activities

2. Includes goals/objectives for expressive, receptive, and non-verbal communication skills

3. Includes goals/objectives for organizational skills and other behaviors that underlie success in a general education classroom (independently completing a task, following instructions, asking for help, etc.)

IDEA indicators

4. The objective can be measured in behavioral terms

5. The conditions under which the behavior is to occur are provided, i.e., when, where, with whom

6. The criterion for goal acquisition (i.e., rate, frequency, percentage, latency, or duration) is described, and a timeline for goal attainment is described specifically for the objective

Following the initial consultation, we asked teachers to update their IEPs with the new goals developed from the consultation. To test whether the features described above were actually incorporated and changed as a function of COMPASS, we analyzed IEPs from teachers who received COMPASS in both RCTs. We had access to the original IEPs and to the revised IEPs, which reflected recommendations from the COMPASS consultation. A rater unaware of group assignment evaluated the quality of all IEPs before COMPASS, and again, for those who received COMPASS, using the updated IEP. To make this comparison, we used the evaluation tool to score the quality of the IEPs against the NRC and IDEA standards.
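To make the scoring concrete, here is a minimal, purely illustrative sketch of how a rater's judgments on the six indicators in Table 6.2 might be tallied for a single objective. The indicator names, the 0/1 present/absent coding, and the simple sum are assumptions for illustration; the published instrument may score differently.

```python
# Illustrative sketch only: tallying one IEP objective against the six
# NRC/IDEA quality indicators listed in Table 6.2. Indicator names and
# the 0/1 present/absent coding are assumptions, not the actual tool.

INDICATORS = [
    "social_skills_goal",       # NRC 1: social skills for school/family involvement
    "communication_goal",       # NRC 2: expressive, receptive, non-verbal communication
    "independence_goal",        # NRC 3: organizational/self-directed learning skills
    "measurable_behavior",      # IDEA 4: measurable in behavioral terms
    "conditions_specified",     # IDEA 5: when, where, with whom
    "criterion_and_timeline",   # IDEA 6: criterion and timeline described
]

def score_objective(ratings):
    """Sum of indicators a rater marked as present (1) for one objective."""
    return sum(ratings.get(name, 0) for name in INDICATORS)

# A hypothetical rater's judgments for a single communication objective:
ratings = {
    "communication_goal": 1,
    "measurable_behavior": 1,
    "conditions_specified": 0,
    "criterion_and_timeline": 1,
}
print(score_objective(ratings))  # -> 3
```

Summed across all of an IEP's objectives, such a tally yields the kind of quality score that can be compared before and after COMPASS.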


Our basic aim was to investigate whether the initial COMPASS consultation impacted IEP quality. To attribute obtained changes to COMPASS with confidence, we asked three interrelated research questions. First, did IEP quality change, i.e., improve, for students whose teachers received COMPASS? Second, after receiving COMPASS, was IEP quality higher for the experimental group than for the control group? Third, were changes in IEP quality restricted to the IEP elements targeted by COMPASS, or were they broad, reflecting overall IEP quality?

For those receiving COMPASS, did IEP quality improve after the consultation? To answer this first question, we scored IEP quality for teachers in the experimental condition both before and after COMPASS and compared the scores, repeating the analysis in both RCTs. Overall, we found that IEP quality improved significantly after COMPASS relative to baseline in study one (t(13) = −2.7, p = 0.02) and study two (t(27) = −8.6, p < 0.001). That is, as expected, IEP quality increased in both RCTs. However, the improvement was stronger in the second RCT. One possible reason is that during the first study we realized we needed to spend considerable time helping teachers create high quality goals. Teachers had a difficult time generating high quality, measurable goals for social, communication, and independent work skills. Thus, we created a template to make the process easier and more efficient in the second RCT. Figure 6.3 shows the template we used during the consultation to ensure a high quality goal. We believe that use of the template helped teachers create better goals and that this is one explanation for the better IEP quality scores in the second RCT. More detail on the use of the template is provided in our manual.
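The pre/post comparison reported above is a paired-samples t test. As a sketch of the computation, assuming hypothetical quality scores rather than the study's actual data:

```python
# Sketch of the pre/post analysis: a paired-samples t statistic computed
# by hand on hypothetical IEP quality scores (not the study's data).
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Return (t, df) where t = mean(diff) / (sd(diff) / sqrt(n))."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

pre  = [4, 5, 3, 6, 4, 5]   # hypothetical pre-COMPASS quality scores
post = [7, 8, 6, 8, 7, 9]   # hypothetical post-COMPASS quality scores
t, df = paired_t(pre, post)
print(round(t, 2), df)  # -> 11.62 5
```

The sign of t simply reflects the order of subtraction; what matters is whether the mean difference is reliably nonzero at the chosen significance level.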


Fig. 6.3
Template for creating high quality IEP goal

Was IEP quality better for the COMPASS group than for teachers who did not receive COMPASS? For this analysis, we compared the IEP quality of the control and experimental conditions, again repeating the analysis in both RCTs. Compared to the control condition, targeted IEP quality was greater in the group receiving COMPASS (t(47) = −5.7, p < 0.001) (see Fig. 6.4). This suggests that COMPASS does produce changes in specific, measurable quality elements. Importantly, IEP quality prior to the COMPASS intervention was similar for the control and experimental groups, suggesting that the improvements obtained after COMPASS were related to the intervention.


Fig. 6.4
Between Group Differences of IEP Quality

Were improvements in IEP quality made across the board (which might indicate the influence of some factor not specific to COMPASS), or were they specific to those elements expected to change as a result of COMPASS? As mentioned earlier, our measure of IEP quality assesses elements identified by both the NRC recommendations and the IDEA standards. COMPASS, however, did not target all aspects of IEP quality; it focused specifically on identifying and crafting measurable goals for the three critical areas identified by the NRC task force. Thus, we did not expect improvements in IEP quality generally, but only in the areas specifically targeted by COMPASS. Accordingly, we divided our IEP quality assessment into targeted and non-targeted elements. As before, we repeated the analyses in both RCTs. As expected, in our combined sample, improvements in IEP quality were found only for the areas targeted by COMPASS (t(62) = 7.2, p < 0.001, two-tailed) and were absent in areas that were not targeted (t(58) = −0.44, p = 0.66). In summary, as expected, COMPASS resulted in improved IEP goals and quality relative to baseline and to the comparison conditions, and the improvement was limited to the areas of IEP quality specifically targeted. It is important to emphasize that these results were obtained in two separate and independent studies, considerably strengthening our confidence in the findings.

Taken together, these results indicate that COMPASS improves goal quality, as measured by IEP quality; that the improvement exceeds the level found for teachers not receiving COMPASS; and that the improvements are not general but specific to the areas targeted by COMPASS. These are important results validating our hypothesized theory of change within COMPASS. A further aspect of that theory is that good, high quality goals are important in and of themselves as drivers of teacher behavior, and that the predicted changes in teacher behavior should align with the goals, which should, in turn, positively impact student outcomes.


Thus, our next question was whether IEP quality correlated with child goal attainment change scores. As already noted, we reasoned that improving the quality of the IEP should result in improved child educational outcomes. We based this hypothesis on several factors. First, improved IEP quality after COMPASS was thought to reflect the careful selection of goals and development of intervention plans with parent and teacher input, shared decision-making, and an understanding of best-practice IEPs for students with ASD. That is, the goals are ecologically and empirically valid, should align with the shared understanding and interests of all parties, and should produce increased commitment to, and confidence in, the goals. Second, goals that reflect the personal needs of the student and are written to be measurable and observable, with a clear criterion level, make progress monitoring easier to conduct; skill attainment can be more readily observed. Moreover, clear goals are more motivating, because individuals are better able to visualize and understand the specified targets. These might seem like obvious features of all IEP goals, but we found in our paper "Examining the Quality of IEPs for Young Children with Autism" (Ruble et al. 2010) that only 40% of goals were described in behavioral terms with clear conditions under which the behavior was to occur. In other words, most IEP goals are not clear enough to measure, or to motivate behavior; and if goals cannot be measured, there is no way of knowing how much progress the child has made. Third, we believe that teachers who followed through with our suggestion to update the IEP with the new goals demonstrated adherence to, and agreement with, the process. That is, teachers with better quality IEPs were not only more likely to embrace the goals, but also more likely to embrace and implement the strategies to achieve them.

Based on this rationale, there was good reason to expect IEP quality to be related to student outcomes. As a first step, we tested the relationship between IEP quality and child goal attainment change. As we had hoped, IEP quality was significantly and robustly correlated with GAS change in study one (r = 0.53, p = 0.025). In other words, as predicted, students of teachers whose IEPs were rated higher in quality also achieved greater progress on their IEP goals. Moreover, the non-targeted IEP quality elements were not correlated with goal attainment change scores (r = 0.001, p = 1.0). That is, the helpful impact of IEP quality held only for the aspects specifically targeted by COMPASS. As mentioned earlier, this helps rule out non-COMPASS influences on the IEP (which would likely produce a general impact on quality) as instrumental in affecting outcomes. Together, these are potentially important findings, since IEPs are a central feature of special education and the elements targeted by COMPASS were the same features sensitive to positive child educational outcomes.

But there was one critical concern: our result came from a single small-sample study. Could we replicate the relationship in a new study with a new sample? When we did this in our second study, the result was similar (r = 0.64, p < 0.001). This independent replication added considerable confidence to our initial findings. We also replicated the finding that non-targeted IEP quality elements were not correlated with child educational outcomes (r = 0.23, p = 0.17). Moreover, using combined data from both studies, the overall Pearson correlation was similarly strong (r = 0.58, p < 0.001, two-tailed). Figure 6.5 displays this effect by graphing the GAS change scores for students with IEP quality above and below the sample median. Thus, we have robust evidence that IEP quality is one explanation for why COMPASS works.
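The correlational analyses above are standard Pearson product-moment correlations. A minimal sketch of the computation, using hypothetical paired scores rather than the study's data:

```python
# Sketch of the correlational check: Pearson r between IEP quality and
# GAS change, computed on hypothetical paired scores (not the study's data).
import math

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

iep_quality = [3, 5, 6, 8, 9, 11]             # hypothetical IEP quality scores
gas_change  = [0.5, 1.0, 1.2, 1.8, 1.6, 2.4]  # hypothetical GAS change scores
print(round(pearson_r(iep_quality, gas_change), 2))  # -> 0.98
```

A positive r of this kind is what the targeted-elements analyses found; the non-targeted elements, by contrast, yielded r values near zero.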


Fig. 6.5
GAS change by IEP quality

Goal-Directed Behaviors. One explanation for the success of COMPASS is that the activities needed to develop high quality IEP goals that are measurable and objective may also be essential for establishing clarity in teachers' goal-directed behaviors. Special educators have challenging work. They often have insufficient time to plan and to help meet student needs. Demands for accountability and paperwork may interfere with classroom teaching, and feelings of lost control over designing and implementing curricular practices and innovations may lead to loss of focus, stress, and burnout (Wisniewski and Gargiulo 1997). The COMPASS process may help counter these challenges by helping teachers feel a sense of efficacy and influence in educational decision-making. The goals that result from such decision-making are integral to focused behavior; that is, as posited by goal setting theory (Ryan 1970), goals affect action. In fact, the more clearly defined, well specified, and time-limited the goals, the better the stage is set for task performance. Locke and Latham (2002) describe four ways in which goals affect action: (a) by directing and maintaining effort toward activities associated with the goals, (b) by increasing effort, (c) by increasing persistence, and (d) by leading indirectly to the arousal, discovery, and use of task-relevant knowledge and strategies.

The goal setting activities within the initial COMPASS consultation embed these important actions of goal development and goal measurement. Moreover, the coaching sessions provide additional features critical for goal attainment: for goals to be attained, feedback on progress is essential (Locke and Latham 2002), and coaching sessions include performance feedback within the set activities of progress monitoring. However, future research is needed to carefully assess the degree to which COMPASS actually includes and promotes these aspects of goal setting, and to test their impact on outcomes. For example, to test this in a future RCT, we would need to assess and analyze the relationships among the following variables: teacher goal attainment self-efficacy, teacher ratings of effort toward each goal, time spent on each goal, perception of receiving feedback on each goal, and helpfulness of that feedback.
