
What if a Student Does Not Respond to Intervention?

May 11th, 2018

By: Rachel Brown, Ph.D., NCSP

Many, if not most, efforts to improve student learning outcomes will result in measurable improvements in student performance. Nonetheless, in some cases a student will not make the growth desired or expected. When this happens, the school team needs to consider what else could be done to assist the student. This blog will review factors to consider before concluding that a student has not responded to intervention, data patterns that indicate a lack of response, and possible next steps.

It is important to recognize that many variables can contribute to a student’s success in school. In some cases, aspects of the student’s life outside of school may interfere with school success. There will always be variables in students’ lives that educators cannot control; however, Hattie (2008) and others have shown that what teachers do in the classroom every day is the single most important predictor of student success. For this reason, this blog will focus on the time and efforts that teachers can control during each school day.

Factors to Consider

Before concluding that a student has not responded to intervention, it is important to review the essential details of the intervention and be sure that all planned steps were implemented accurately. Such consideration should include evaluating intervention integrity, assessment integrity, number of data points, and goal details.

Intervention integrity. This refers to whether or not the planned intervention was implemented as intended. This matters because if the planned intervention was implemented incompletely, it is impossible to know whether the limited effects occurred because the intervention did not work for this student or because steps were missing. The first step in reviewing integrity is to confirm that the intervention was evidence-based. Evidence-based interventions (EBI) are ones that have been documented to produce certain results in multiple research and applied settings. They are important because they include the methods with the highest likelihood of helping students succeed. If it turns out that the intervention was not evidence-based, the best next step is to implement an EBI with integrity.

After confirming that the intervention was an EBI, other intervention details to check are frequency, duration, and implementation accuracy. Frequency and duration are important because they determine the amount of time that the student participates in the intervention. Even the best EBI are not likely to be as effective if provided for a very limited number of days or minutes. Frequency is the number of days per week that an intervention is provided. The published research and intervention instructions should indicate the best frequency. For students participating in Tier 2 strategic interventions, 2 or 3 days per week are common. For students participating in Tier 3 intensive interventions, 4 to 5 days per week are recommended. The duration of each intervention session matters as well. Again, the duration will be specified in the directions, and longer sessions are more typical for more intensive interventions. Frequency and duration can be thought of as the “dosage” of an intervention, much like the dosage of a medication. The right strength is needed for the treatment to work.
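To make the dosage idea concrete, here is a minimal sketch in Python. The session lengths shown are hypothetical examples for illustration, not published recommendations.

    # A minimal sketch of intervention "dosage": frequency x duration.
    # Session lengths here are hypothetical examples, not prescribed values.

    def weekly_dosage(days_per_week: int, minutes_per_session: int) -> int:
        """Total intervention minutes a student receives per week."""
        return days_per_week * minutes_per_session

    tier2 = weekly_dosage(3, 20)   # hypothetical Tier 2 plan: 60 min/week
    tier3 = weekly_dosage(5, 30)   # hypothetical Tier 3 plan: 150 min/week
    print(f"Tier 2: {tier2} min/week; Tier 3: {tier3} min/week")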

Besides dosage, the actual steps in the intervention need to be checked and confirmed. Most EBI will have an intervention integrity checklist that lists each lesson step. The checklist can be used to make sure that every step has been followed correctly. The best way to use an integrity checklist is to have an observer watch a lesson and record whether each step was followed. Because observation is time consuming, an alternative is to have the interventionist complete the checklist after each lesson so that a percentage of correctly completed steps can be calculated. Unless intervention integrity is confirmed, there is no way to be certain that a student’s data reflect the effects of the intervention.
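As a sketch of that calculation, the integrity percentage is simply completed steps divided by total steps. The checklist items below are hypothetical placeholders; real checklists come with the published intervention.

    # A minimal sketch of scoring an intervention integrity checklist.
    # The checklist steps are hypothetical placeholders.

    checklist = {
        "Reviewed previous lesson": True,
        "Modeled the new skill": True,
        "Provided guided practice": True,
        "Gave corrective feedback": False,  # step skipped this session
        "Assigned independent practice": True,
    }

    completed = sum(checklist.values())
    integrity_pct = 100 * completed / len(checklist)
    print(f"Integrity: {integrity_pct:.0f}% of steps completed")  # 80%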

Assessment integrity. As with intervention integrity, assessment integrity is necessary in order for the data to be interpretable. Assessment integrity refers to whether or not the assessment was administered and scored correctly. For some assessments this is very easy to determine. For example, computer-based assessments are automated and will always be done correctly unless interrupted. Those assessments requiring teacher-based administration need to be done according to the published directions in order to be valid. In the case of progress data, all the scores need to be from alternate forms of the same assessment administered in the same standardized way each time. This standardization makes it possible to compare scores over time.

Number of data points. Having enough data points to draw conclusions is important. With only one or two scores, it is impossible to determine whether there is a trend in the data. In general, the more data points, the more reliable the information. Research about academic progress data indicates that somewhere between 9 and 12 data points are needed before the scores can be interpreted with confidence (Christ, Zopluoglu, Monaghen, & Van Norman, 2013). That said, if a smaller number of data points indicates that the student has not made any gains at all, an intervention change can be justified. Progress monitoring frequency makes a big difference in how soon the data can be interpreted. If progress monitoring is done monthly, it could take 9 to 12 months before the scores can be interpreted; compare that with the 9 to 12 weeks needed when monitoring weekly. For this reason, FastBridge Learning® recommends weekly monitoring for all students.
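To illustrate how a trend is estimated from weekly scores, here is a minimal sketch that fits an ordinary least-squares line to hypothetical progress data. With only a few points this slope estimate is unstable, which is one reason 9 to 12 points are recommended before drawing conclusions.

    # A minimal sketch: estimate the weekly growth trend from progress scores.
    # Scores are hypothetical weekly values (e.g., words read correctly).

    def trend_slope(scores):
        """Ordinary least-squares slope: average gain per week."""
        n = len(scores)
        weeks = range(n)
        mean_x = sum(weeks) / n
        mean_y = sum(scores) / n
        num = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, scores))
        den = sum((x - mean_x) ** 2 for x in weeks)
        return num / den

    scores = [42, 44, 43, 47, 46, 49, 51, 50, 53]  # 9 weekly data points
    print(f"Estimated growth: {trend_slope(scores):.2f} points per week")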

Goal details. A final detail to consider before drawing conclusions about progress data is the student’s goal. The goal serves as the basis for comparing the weekly scores. Goals that are too low or too high can distort data interpretation. For example, a fifth grade student who is monitored at the second grade level might have made impressive progress toward the end-of-year second grade goal. While such improvement is good, this student needs to make additional gains in order to catch up to grade level. Alternatively, a goal might be too ambitious for a specific student to reach during the current school year. In cases where goals are low (or set at a lower grade level) or very ambitious, the progress graph might suggest that the student is making more or less progress than is actually the case. Be sure to confirm that the goal makes sense in relation to what could be considered effective progress.
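One way to sanity-check a goal is to compute the weekly growth rate it implies. The sketch below uses hypothetical baseline, goal, and timeline values.

    # A minimal sketch: the weekly growth rate implied by a goal.
    # Baseline, goal, and weeks remaining are hypothetical values.

    baseline_score = 40      # student's current score
    goal_score = 90          # end-of-year goal
    weeks_remaining = 25     # weeks left in the school year

    required_growth = (goal_score - baseline_score) / weeks_remaining
    print(f"Required growth: {required_growth:.1f} points per week")
    # If this rate far exceeds realistic growth norms for the measure,
    # the goal may be too ambitious; if it is near zero, it may be too low.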

Data Patterns

Once the intervention details, implementation integrity, assessment integrity, and goal appropriateness have all been checked, the student’s accumulated progress data can be reviewed. Three data patterns are most common:

  1. Scores improving and on track to reach goal
  2. Scores improving but not on track to reach goal
  3. Scores not improving

Each of these patterns is described below.

Improving and on track. When a student’s data indicate a strong and positive response to the intervention, no immediate changes are usually needed. The student is making good progress toward the goal, and the current intervention should be maintained.

Improving but not on track. In some cases, a student might be making some progress with the intervention, but the scores are not improving at a rate that will allow the student to reach the goal within the current school year.

When a student has made some progress, but not enough to reach the goal, the best next step is to increase the intervention dosage. This means increasing the frequency and/or duration of lessons so that the student has more access to the instruction. If the stronger dosage puts the student on track to meet the goal, no further changes are needed. If the resulting data still show an inadequate response, selecting a different intervention or collecting additional data is the next step.

Not improving. Both of the above patterns reflect that a student did respond to intervention. The good news is that most students will respond with positive results when an intervention is implemented correctly. Still, a few students will not achieve the desired results.

Consider, for example, progress data that so far indicate the student has not made adequate progress toward the goal, but with only four data points and missed weeks for winter break. Best practice in that case is to collect a few more data points to see whether the trend goes up or the student is truly not responding. When a student has not responded, the team must decide whether to select a different intervention or to conduct a more intensive assessment. That assessment could be part of a comprehensive evaluation for special education services.
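Pulling the three patterns together, the following sketch classifies a student’s progress by comparing the observed trend against the growth rate the goal requires. The scores and thresholds are hypothetical; the nine-point minimum follows Christ et al. (2013).

    # A minimal sketch: classify progress data against the goal's growth rate.
    # Scores and thresholds are hypothetical illustration values.

    def slope(scores):
        """Least-squares slope: average change per week."""
        n = len(scores)
        mx, my = (n - 1) / 2, sum(scores) / n
        num = sum((x - mx) * (y - my) for x, y in enumerate(scores))
        return num / sum((x - mx) ** 2 for x in range(n))

    def classify(scores, required_growth, min_points=9):
        if len(scores) < min_points:
            return "Too few data points: keep monitoring"
        observed = slope(scores)
        if observed >= required_growth:
            return "Improving and on track: maintain the intervention"
        if observed > 0:
            return "Improving but not on track: increase dosage"
        return "Not improving: change intervention or evaluate further"

    scores = [42, 44, 43, 47, 46, 49, 51, 50, 53]  # hypothetical weekly scores
    print(classify(scores, required_growth=2.0))
    # -> "Improving but not on track: increase dosage"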

Next Steps

In order for progress data to be useful, accurate intervention implementation and assessment are needed. Once accurate practices have been confirmed, the team can review the graph to determine whether the student has responded. For students who have responded well, maintain the intervention and eventually fade it. For students who have responded somewhat but not enough to reach the goal, increase the intervention dosage. For students whose data indicate a lack of response, implement a different intervention or consider whether a comprehensive evaluation is needed.

References

Christ, T. J., Zopluoglu, C., Monaghen, B. D., & Van Norman, E. R. (2013). Curriculum-based measurement of oral reading: Multi-study evaluation of schedule, duration, and dataset quality on progress monitoring outcomes. Journal of School Psychology, 51(1), 19-57. doi:10.1016/j.jsp.2012.11.001

Hattie, J. (2008). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Abingdon, Oxon: Routledge.
