Using Progress Monitoring to Support Students Exiting from Special Education Services

Accurate identification of learning disabilities is a challenge at best. Currently in the US, 14% of students receive special education services, and 33% of those are served for learning disabilities (i.e., approximately 4-5% of all students) (National Center for Education Statistics (NCES)). The National Institutes of Health (NIH) suggests the prevalence of students with learning disabilities is 7.8%. Methods of identifying learning disabilities have also changed over the past 10 years.

Although guidelines vary from state to state, nationwide, use of a severe discrepancy to identify learning disabilities has been deemed invalid and discouraged, if not disallowed. There is also evidence that unscientific instructional practices lead to student academic deficits that are inaccurately diagnosed as learning disabilities (Reschly, 2014). Best practice for disability identification includes emphasis on using treatment-relevant data from a multi-tiered system of supports (MTSS) process. Some districts and practitioners opt for a ‘strengths and weaknesses’ approach to identifying learning disabilities. Given the inconsistencies in estimated prevalence rates, inconsistent diagnostic methods, and the potential for ‘false positives’, there needs to be a careful process to reconsider and exit students from services who might not exhibit a learning disability when provided with effective instruction within an MTSS.

Re-evaluation and Declassification

The Individuals with Disabilities Education Improvement Act (IDEIA) requires that students with IEPs be re-evaluated at least every three years to formally review diagnostic questions and the efficacy of special education programming. According to the US Department of Education, from 1999 to 2002, 9% of students ages 6 to 12 designated as having learning disabilities were declassified. Most states have guidelines (with varying degrees of specificity) for how learning disabilities are initially diagnosed, but there are few guidelines to determine when students should be declassified.

Declassification of special education services is a high-stakes decision. Students who have learning disabilities, but are not provided appropriate supports, may be at risk for disciplinary problems and dropping out of school. At the same time, districts have an obligation to provide students with the least restrictive environment. Legally, students must undergo a re-evaluation process in order to be declassified from special education services. It is up to the multidisciplinary team to determine assessments that address diagnostic and programmatic questions during re-evaluation review. This article will focus on MTSS assessments that help to demonstrate a case for or against the presence of a learning disability, through an instructionally relevant, data-based decision-making process, including response to tiered supports.

Quality IEP Goals/Quality Progress Monitoring

IEP goals are required to indicate present levels of academic and functional performance and how they relate to the child’s performance in the general education curriculum. Goals must be relevant to the child’s need as a result of the disability, enable progress in the general education setting, be measurable, time-bound, and related to student instructional needs. In addition, IEP goals need to be measured at regular intervals. It is essential that data collection be feasible so that educators are not taking excessive time from instruction for assessment. Parents should be informed of how and why the goal was set, what it means when their child achieves it, and be part of the problem-solving process when it is not met.

Curriculum-Based Measures (CBM), a recommended assessment within MTSS, are reliable, valid, simple, quick, sensitive to change, and have norms for student performance (fall, winter, and spring percentiles), as well as norms for growth. Some CBMs require individual administration (e.g., curriculum-based measures in reading, number identification fluency), and take only about one or two minutes to conduct. Others are computer-based and, while completed individually, can be administered in a group setting (e.g., CBMmath Automaticity), making them very feasible for frequent (weekly) progress monitoring. Given the sense of urgency for students diagnosed as having learning disabilities to accelerate growth (i.e., “catch up” growth), regular progress monitoring must be a priority.

CBM in reading is perhaps the most widely recommended progress monitoring measure used within an MTSS. CBM for reading includes an oral reading fluency score indicating the number of words the student read correctly in one minute. When CBM reading progress monitoring data are highly variable from week to week, it may take up to 12 weeks of monitoring before obtaining a reliable rate of improvement for making decisions (Van Norman, Christ & Newell, 2017). Having multiple equivalent passages that are carefully developed to be equated can reduce variability in data. The FastBridge passages were developed to be highly consistent within each level (Ardoin et al., 2013). FastBridge also offers the FAST Projection™ line which uses conditional probability to predict future outcomes and thereby lessens the impact of variable data so that sound decisions can be made with six data points over at least six weeks. These qualities allow educators to make timely decisions to problem solve when interventions are not resulting in goal-level progress.
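To make the idea of a rate of improvement concrete, here is a minimal sketch that fits an ordinary least-squares trend line to weekly oral reading fluency scores. The scores are hypothetical, and this simple slope is not FastBridge's FAST Projection method (which uses conditional probability); it only illustrates how a weekly growth rate can be derived from monitoring data.

```python
# Sketch: estimating a rate of improvement (ROI) from weekly CBM oral
# reading fluency scores using an ordinary least-squares slope.
# Hypothetical data; not the FAST Projection algorithm.

def rate_of_improvement(scores):
    """Return the OLS slope: words correct per minute gained per week."""
    n = len(scores)
    weeks = range(n)
    mean_week = sum(weeks) / n
    mean_score = sum(scores) / n
    num = sum((w - mean_week) * (s - mean_score) for w, s in zip(weeks, scores))
    den = sum((w - mean_week) ** 2 for w in weeks)
    return num / den

# Six weekly data points -- the minimum suggested above for a decision
wcpm = [42, 45, 44, 49, 51, 53]
print(f"ROI: {rate_of_improvement(wcpm):.2f} words correct/minute per week")
```

With variable week-to-week data, a slope computed from too few points can be unreliable, which is why the decision rules above call for a minimum number of data points before acting.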

A challenge for educators is determining realistic yet ambitious criteria for approaching and meeting grade level standards. Similarly, determining whether progress data are within a certain range of expected progress can be difficult. For students with learning gaps, including those with learning disabilities, the primary goal is to accelerate learning so that students can close the gap. Schools vary in how they select learning goals for individual students. Nonetheless, students whose goal is to catch up to their grade-level peers must learn more content at a faster pace in order for this to happen.

For example, students participating in scientifically based Tier 1 (core) reading instruction as well as 30 minutes per day of supplemental intervention, might be expected to demonstrate reading growth at the 75th percentile rate of improvement compared with the typical growth of all students. FastBridge provides growth norms for all measures.

It is important to consider the need for accelerated learning in order for students to make catch-up growth. Some schools may set growth goals to be “at least average” (i.e., 50th percentile) relative to peers. Other schools use end-of-year expectations (i.e., benchmarks) and set progress monitoring goals accordingly. For students whose fall start scores are quite low, goals based on benchmark scores (typically somewhere above the 40th percentile) result in goals that are rarely attainable, creating an inaccurate perception of failure (Van Norman, Christ & Newell, 2017). Furthermore, when considering decisions about resource allocation, it is rare that a school can provide meaningful intensive Tier 2 and 3 supports to 40 percent of its students (or many more in the case of a low-performing school/district). Also, requiring students to be ‘at benchmark’ before they are dismissed from special education potentially creates a scenario in which many students in general education could be performing below students receiving special education supports. For these reasons, states and Local Education Agencies (LEAs) should set realistic but ambitious guidelines for students receiving tiered supports, as well as IEP goals, with these considerations in mind.
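The goal-setting arithmetic described above can be sketched as follows. The baseline score, monitoring window, and ROI values here are invented for illustration; actual goals should use published growth norms (e.g., FastBridge's) for the specific measure and grade.

```python
# Sketch: projecting a progress monitoring goal from a baseline score
# and a normative weekly rate of improvement (ROI).
# All numbers below are illustrative placeholders, not real norms.

def set_goal(baseline, roi_per_week, weeks):
    """Project an end-of-period goal from baseline plus weekly growth."""
    return baseline + roi_per_week * weeks

baseline_wcpm = 40        # hypothetical fall score, words correct/minute
weeks = 18                # fall-to-spring monitoring window

typical_roi = 1.0         # illustrative 50th percentile growth rate
catch_up_roi = 1.5        # illustrative 75th percentile ("catch-up") rate

print("Typical-growth goal:", set_goal(baseline_wcpm, typical_roi, weeks))
print("Catch-up goal:", set_goal(baseline_wcpm, catch_up_roi, weeks))
```

The difference between the two projected goals shows why the choice of growth percentile matters: a catch-up goal demands measurably more learning over the same window.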

Special Education Exit Data Criteria

When asking whether declassification is warranted, a first question to ask is: how did the student get into special education in the first place? The dual discrepancy model (Reschly, 2014) asks:

A. Is the student discrepant from peers, and
B. Is the student making less than expected progress?

Universal screening data can be used to answer the first question and progress data can be used to answer the second one. These assessments allow educators to know if a specific student’s skills are significantly below peers and whether the progress data indicate enough growth to close the gap through accelerated learning. Specifically, when a student’s current performance is on a par with grade-level peers and the progress data show at least typical growth, then the student might no longer need special education. It is up to state and/or LEA guidelines to develop criteria for ‘expected progress’ and ‘closing the gap’.
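The dual discrepancy check above can be expressed as a simple two-prong rule. The percentile cutoffs in this sketch are placeholders; actual criteria for ‘discrepant from peers’ and ‘expected progress’ come from state and/or LEA guidelines.

```python
# Sketch of the dual discrepancy model: (A) is the student discrepant
# from peers, and (B) is the student making less than expected progress?
# Cutoffs are hypothetical placeholders, not established criteria.

def dual_discrepancy(performance_pct, growth_pct,
                     perf_cutoff=10, growth_cutoff=25):
    """Return (discrepant_from_peers, inadequate_progress)."""
    return performance_pct < perf_cutoff, growth_pct < growth_cutoff

# A student near grade level (45th percentile) growing at a typical
# rate (55th percentile ROI) meets neither prong, which supports
# considering declassification.
print(dual_discrepancy(performance_pct=45, growth_pct=55))
```

Screening data answer the first prong; progress monitoring data answer the second.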

Confirmation that adequate instruction has been in place is also an essential component of data-based decision making and problem solving. Simply meeting one’s IEP goal may not be adequate to demonstrate that the student should be exited from services. Maintaining gains to grade-level standards without special education supports is a key consideration. And all decisions about exiting special education services must be made by the full IEP team using recent data.

Post-Release Monitoring

A system should be in place by which students who stop receiving special education services (or MTSS tiered supports) continue to participate in progress monitoring. Such data will make it possible to confirm whether the student can be successful without significant support. If students make less than average growth without special education support, it may be determined that it was the intensive intervention afforded through special education that resulted in academic success. MTSS problem solving and intervention would still be important to determine whether or not these students can achieve adequate growth through general education support.

How to Determine If a Student Is Ready to Exit Special Education Supports

  1. Develop a district policy that is in line with specific state guidelines for disability classification and declassification.
  2. Use high-quality progress monitoring tools, administered weekly, in areas of need, as part of a student’s IEP goal. Set IEP goals for students to close gaps (at least 50th percentile rate of improvement). Decisions about progress should be reviewed and needs addressed every 6–8 weeks.
  3. Develop district guidance for levels of performance that suggest a student is in the range of meeting ‘grade-level expectations’ (e.g., near average performance on universal screening and progress monitoring in areas addressed by IEP).
  4. When students are not making progress that closes gaps, convene the IEP team to revise goals and instruction and problem-solve other factors that may be responsible for lack of growth.
  5. When students are approaching the expected grade-level performance, and they are making progress towards closing the gap, initiate a re-evaluation. Use MTSS data and other information, as required by state guidelines, to declassify students. The district may determine declassification supports.
  6. Continue progress monitoring for a specified period of time after the student exits special education. If progress slows, this suggests that special education services were necessary for accelerated progress. A problem-solving process should attempt to improve performance within general education (MTSS). If repeated attempts to address needs through general education support are unsuccessful, special education services may be reconsidered.
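The decision logic in steps 3-5 above can be sketched as a simple rule. The thresholds are placeholders standing in for the district/state criteria that steps 1 and 3 call for developing.

```python
# Sketch: encoding the exit-readiness decision in steps 3-5 as a rule.
# Thresholds are hypothetical placeholders for district/state criteria.

def exit_recommendation(screening_pct, roi_pct,
                        near_grade_level=40, closing_gap=50):
    """Suggest a next step from screening performance and growth rate."""
    approaching = screening_pct >= near_grade_level
    closing = roi_pct >= closing_gap
    if approaching and closing:
        return "initiate re-evaluation"                        # step 5
    if not closing:
        return "convene IEP team to revise goals/instruction"  # step 4
    return "continue monitoring"

print(exit_recommendation(screening_pct=45, roi_pct=60))
```

In practice these thresholds are only one input; the full IEP team weighs them alongside the other information the steps describe.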

About the Author


Seth Aldrich, Ph.D.
Dr. Aldrich is a certified bilingual school psychologist, as well as a NY State licensed psychologist. He works with several school districts concerning Multi-Tiered System of Support for academic and behavioral difficulties. He has been a consortium member with the New York State Middle School Demonstration Project (formerly Response to Intervention Technical Assistance Center), and works primarily with English learners (ELs), as well as family court involved youth in his private practice. Seth’s most recent publication is the book, RTI for English Language Learners: Understanding, Differentiation and Support.

References

Ardoin, S. P., Christ, T. J., Morena, L. S., Cormier, D. C., & Klingbeil, D. A. (2013). A systematic review and summarization of the recommendations and research surrounding curriculum-based measurement of oral reading fluency (CBM-R) decision rules. Journal of School Psychology, 51, 1-18. doi:10.1016/j.jsp.2012.09.004

Reschly, D. J. (2014). Response to intervention and the identification of specific learning disabilities. Topics in Language Disorders, 34(1), 39-58.

Van Norman, R. R., Christ, T. J., & Newell, K. W. (2017). Curriculum-based measurement of reading progress monitoring: The importance of growth magnitude and goal setting in decision making. School Psychology Review, 46(3), 320-328. doi:10.17105/SPR-2017-0065.V46-3
