To help answer our questions effectively, we applied the same sampling, recruitment, measures, and randomization procedures in both RCTs, which then allowed us to combine samples to answer secondary research questions (such as the impact of teacher burnout on student outcomes). Both studies also targeted the same group: special education teachers who were responsible for the Individualized Education Programs (IEPs) of students with autism aged 3–8 in public schools. About half of the teachers were the child's primary classroom teacher, and the other half were the child's resource teacher or general education support teacher. All were the primary person responsible for implementation of the IEP. Sampling and methods were similar across the studies. Teachers were asked to participate at the start of the school year (Time 1). To maintain confidentiality, those who agreed were then asked to provide only the initials of the students with ASD they taught. We selected one student at random and then asked the teacher to forward information about the study to the parents and caregivers of the selected student. After parents agreed to participate and a comprehensive baseline evaluation was completed for each teacher–child dyad, the dyads were randomized to the control or experimental condition. To determine the impact of COMPASS on child educational goal attainment at the end of the school year, a Time 2 evaluation was completed using the same measures applied at the start of the school year. To ensure objective and independent assessment, an independent evaluator blind to participant assignment was used to judge child progress on IEP goals.
With one exception, which is described below in the discussion of the individual studies, the COMPASS intervention was implemented identically in both studies. In both, teachers in the experimental conditions received the initial COMPASS consultation and four follow-up coaching sessions. The initial consultation included the teacher and parent and lasted approximately 2.5–3 h. Each coaching session lasted between 60 and 90 min and occurred about every 4–6 weeks, for a total of less than 10 h of consultation across the school year. Fig. 4.1 describes the activities of the initial consultation and the activities that occur during each coaching session.
COMPASS initial consultation and follow-up coaching
The key outcome measure for both studies was Goal Attainment Score (GAS) change from baseline to end of the study. We chose GAS as our primary outcome measure for several reasons, as discussed in the prior chapter. Specifically, it allowed us to assess intervention outcomes in group design research when children started at different baseline levels of skill, had different goals, and had different intervention plans. This idiographic method has been applied in numerous studies and is described in detail in Chap. 3.
Description of Two RCTs
Study 1 (Ruble et al. 2010)
As noted earlier, the primary goal of the first RCT was to establish proof of concept. In this study, the mean age of the children was 6.1 years, and they represented children across the autism spectrum who attended special education classrooms full time or part time or who were educated in general education classrooms. Children were recruited based on an autism diagnosis. The only exclusion criterion was the presence of a sensory impairment (hearing or visual). Teacher–student dyads were randomized into COMPASS or into a comparison group that received services as usual. Figure 4.2 shows the sequence of events in carrying out the study. Once the teachers, parents, and children completed baseline measures, teachers were randomized to the control or COMPASS group.
Research design for study 1
To validate that the randomization procedure worked as intended, child age, autism severity, IQ, language ability, and adaptive behavior were compared between the control and COMPASS groups; no significant differences were found. Overall, teachers in the COMPASS group received a little less than 10 h of consultation from the researchers. Students in the control group received their special education program as usual. Students in both the COMPASS and control groups had goals that reflected a social skill, a communication skill, and a learning or independent work skill. To determine the amount of progress children made on their IEP goals, child goal attainment change at the end of the school year, measured as final goal attainment scores minus beginning-of-year goal attainment scores, was collected by an observer who was independent of the research team and unaware of group assignment. The overall results show that students whose teachers received COMPASS had GAS scores that were significantly higher than those in the control group. Students whose teachers received COMPASS made a 1.5 standard deviation improvement compared to students in the control group (see Fig. 4.3). After controlling for Time 1 GAS scores, there was a statistically significant group difference in change from pretest to posttest, F (1, 29) = 11.08, p = 0.002, indicating greater improvement in scores for children in the experimental group (M = 5.4) relative to the control group (M = 2.4). Thus, we were able to clearly answer our first question: COMPASS was effective.
Study 1 GAS outcomes
Study 2 (Ruble et al. 2013)
Based on the positive results from Study 1, we were able to ask new questions that could be tested in a different study. For Study 2, we asked three new primary questions: (a) Can we replicate our results from Study 1 in new sites? (b) Is COMPASS still more effective when compared against a more active placebo control, not just special education services as usual? (c) What can we do to better deliver COMPASS to rural schools or distant sites? These questions led to a second study. Question one is the replication question, addressing the core issue of whether we can confirm COMPASS effectiveness in a second independent sample; it relates to step 3 of the Evidence Ladder (Chap. 1; Fig. 1.2). Question two expands this effectiveness question by adding a more active control condition. That is, in Study 1 the comparison was to standard special education, sometimes referred to as a treatment-as-usual comparison. For Study 2, we wanted to add a more active treatment comparison in addition to standard special education. Question three focused on an important implementation concern: how to provide consultation to distant sites, where travel and face-to-face intervention tend to be difficult. To address question three specifically, we added a second experimental condition that tested web-based videoconferencing as a means of coaching classroom teachers. We also asked a set of secondary questions that focused specifically on the WEB condition. We outline these questions after first presenting the answers to our primary questions.