Universal screening is the administration of a reading, math, or social-emotional behavior assessment for the purpose of identifying students who may benefit from additional instructional support or intervention.
Screening information can also help educators determine appropriate resource allocation at the building or district level.
Screening is a universal practice within a multi-tiered system of support (MTSS) that occurs several times throughout the school year: fall, winter, and spring. In other words, all students are assessed during designated screening periods. Screening is part of the problem identification stage.
While fall and winter screening data allow educators to identify students and organize them into groups for progress monitoring during intervention, spring screening data can serve additional purposes. For example, spring screening data can be used to:
- Provide evidence regarding intervention effectiveness
- Evaluate instructional programs
- Determine resource allocation (including assignment of students to groups for the following school year)
- Modify curriculum and instruction
- Monitor overall student growth throughout the academic school year
As a follow-up to spring screening, and similar to the winter screening period, school teams can apply the problem-solving process to make informed decisions about both core instruction as well as individual student learning needs.
Table of Contents
- What Is the Purpose of Universal Screening?
- Planning for Spring Screening
- Understanding Your Spring Screening Data
- Other Examples of Using Spring Data
- Planning for Fall Exceptions
What Is the Purpose of Universal Screening?
Universal screening has become a common practice in many schools. This screening involves having all of the students in each grade complete the same assessment.
Traditional approaches to universal screening have typically involved conducting the assessments three times a year: in the fall, winter, and spring. However, research on screening indicates that it may not be necessary to screen all students three times a year; less frequent screening can provide enough data about student performance to guide instruction.
There are two main purposes for universal screening: program evaluation and identifying students needing assistance.
Program Evaluation
Screening data can be used to show the effects of Tier 1 core instructional programs. Examining the effects of core instruction is an important part of MTSS.
Only when core instruction is effective will other supports be helpful. Core instruction is typically considered effective when 80% or more of students attain the grade-level learning goal at each screening period. When this is the case, additional supports for students who need them can focus on supplementary instruction.
When fewer than 80% of students have met the benchmark, it is important that efforts be put in place to strengthen core instruction as well as provide supplemental interventions for those students with the greatest difficulty.
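The 80% guideline above can be expressed as a simple calculation. Here is a minimal sketch in Python; the function name, scores, and benchmark are hypothetical illustrations, not values from any real assessment:

```python
def core_instruction_check(scores, benchmark, threshold=0.80):
    """Return the proportion of students at or above benchmark and whether
    core instruction meets the common 80% effectiveness guideline."""
    at_or_above = sum(1 for s in scores if s >= benchmark)
    proportion = at_or_above / len(scores)
    return proportion, proportion >= threshold

# Hypothetical class of 25 students with a hypothetical benchmark score of 40
scores = [52, 38, 45, 41, 40, 33, 47, 55, 39, 44, 42, 48, 36, 50, 43,
          41, 46, 40, 37, 49, 44, 42, 51, 35, 40]
proportion, effective = core_instruction_check(scores, benchmark=40)
# 19 of 25 students (76%) met the benchmark, so core instruction would
# warrant strengthening alongside supplemental interventions.
```

In this illustration, 76% of students met the benchmark, so the team would examine core instruction as well as supports for the lowest-performing students.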
Students Needing Assistance
The second purpose of universal screening is to identify which students might need extra help. Importantly, a student’s screening score must be compared with other sources of information in order to confirm that the score is accurate. Comparing multiple sources of information also helps teachers identify each student’s specific learning needs.
It is likely that even when fewer than 80% of students have met the benchmark goal, there will be some students who are very significantly behind their peers. For such students, immediate assistance alongside ongoing efforts to improve core instruction is important.
Research on the Benefits of Universal Screening
Screening studies have documented that such data are predictive of later student performance (Eklund, Kilgus, von der Embse, Beardmore, & Tanner, 2017; Kettler & Albers, 2013).
Researchers have reviewed a number of different types of screening measures, including computer-adaptive tests (CAT), curriculum-based measures (CBM), and rating scales (Glover & Albers, 2007; Kettler & Elliott, 2010; Salinger, 2016).
Other research has examined the frequency of universal screening and indicated that less frequent screening for some students is just as effective but with less interruption to instruction (Klingbeil, Nelson, Van Norman, & Birr, 2017; Stevenson, 2017; Van Norman, Nelson, & Klingbeil, 2017; VanDerHeyden, 2013).
Based on these findings, it appears that using spring data to group students for the following school year, based on their individual learning needs, is one way to reduce testing time while increasing intervention access at the beginning of the school year.
Identifying the Right Screening Schedule
There are pros and cons to screening all students three times a year (tri-annually). Benefits include:
- Ongoing universal data
- Information from the beginning and end of each school year
- Growth indicators
- More information to share with parents
Despite these benefits, tri-annual screening also has limitations:
- Takes time from instruction
- Not needed for all students
- Might not provide necessary details for intervention
Given these pros and cons associated with tri-annual screening, it is important to compare tri-annual screening with other schedules. The following table lists possible outcomes when less frequent screening is conducted:
Although each screening schedule has some benefits and drawbacks, the one that can help to ensure that students start intervention immediately when the school year begins is the winter and spring schedule.
With this schedule, the school does not conduct a fall screening of all students but, instead, uses the prior year’s spring data to group most students for instruction. Both teacher assignments and small group intervention plans can be made using a combination of the spring data and other records for returning students.
Fall screening can still be conducted for any new students so that teachers will have data to inform the best instructional placements.
Planning for Spring Screening
Assessments may be administered only once per student during the designated screening period. It is also important to know that all screening periods are identified and set by your District Manager at the beginning of the year.
Within the FastBridge system, any scores collected during the screening period will be color-coded according to school norms (i.e., percentile groups) on the various reports. Scores collected outside of the screening period display in gray on the reports.
FastBridge offers a suite of screening assessments, including the following:
- CBMMath (Automaticity, CAP, and Process)
For assessments that provide a composite score, the subtests highlighted in green should be administered during the spring screening period. These subtests may differ from those administered during the fall or winter screening periods. See below for an example of earlyReading.
Review Your Roster
Before the spring screening period, review your roster to ensure that all students have been appropriately uploaded. If there are students who need to be added, contact your District Manager.
Organize Your Schedule
Screening periods, especially at the end of the school year, can seem overwhelming because many students take multiple assessments during each period.
In addition, spring is often an assessment-heavy time of year because of statewide and end-of-year testing.
It is important to consider:
- How long each assessment takes
- The setting in which each assessment is administered (e.g., one-on-one or group)
- The method by which the assessment is administered (e.g., tablet, computer, or paper/pencil)
- The environment in which you will administer the assessment (e.g., the computer lab, the classroom, etc.)
The more detail in your schedule, the better! Identify screening schedules by grade level and classroom. Identify quiet spaces with minimal distractions in the school building for students to complete their assessment(s), and provide signage to alert school community members that assessments are being administered.
Finally, although a finely tuned schedule is best practice, it is not complete without planned makeup screening dates for those students with unavoidable absences or those needing extended time to complete an assessment.
Complete (or Refresh) Your Training
Educators administering spring screening measures should be up to date on all training modules and, if using FastBridge, the certification quizzes available within the platform.
Notify Families in Advance
Families can serve as a source of support during spring screening, especially for those students who may feel anxiety near the end of the school year.
It is important to send home reminders about the spring screening period to families in advance. This allows families to ask questions or discuss concerns before the screening period begins.
Identify an On-call “Point Person”
Despite careful planning, organization, and preparation for spring screening, technical, logistical, or Internet-related issues can still occur.
It is helpful to have a designated educator in the building to problem-solve when these issues arise. Discuss with your building team plans for communication throughout the testing/screening period.
Provide a Quiet Activity to Those Who Finish Before Others
In some instances, students testing in a classroom with other students may finish earlier than others.
Plan ahead and organize a quiet activity for students to engage in if they complete an assessment before others. This will help to maintain a quiet testing environment with few distractions.
Encourage a Growth Mindset
Remind students that assessments are an opportunity to show their learning throughout the school year. For older students:
- Encourage goal setting
- Review test-taking strategies
- Discuss simple steps to take the day and night before the assessment (e.g., getting sufficient sleep and eating a healthy breakfast)
Understanding Your Spring Screening Data
In order for you to get the most out of your spring screening data, it is important to consider the following for all students:
- How did the students perform in the fall and winter? (i.e., what were their ability levels at the beginning and middle of the school year?)
- How many students did or did not meet grade-level expectations at prior screenings?
- How much growth did students make from fall to winter and from winter to spring? (i.e., what were their rates of improvement [ROI]?)
- How did the students in each class perform compared to others in their class, school, district, and nation?
Considering only one of the above questions can drastically limit your understanding of a student’s performance on a screening measure. For example, although growth is important, it does not provide a full representation of a student’s achievement level.
More specifically, a student who performed far above grade level expectations at the fall screening but showed no growth across the course of the school year has different needs than a student whose fall score was below benchmark and had no or little growth over the course of the school year.
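The rate of improvement (ROI) mentioned above is typically computed as the score gain divided by the number of weeks between screenings. Here is a minimal sketch of that arithmetic; the scores and screening dates are hypothetical, not drawn from any real screening window:

```python
from datetime import date

def rate_of_improvement(score_start, score_end, date_start, date_end):
    """Weekly rate of improvement (ROI): score gain divided by weeks elapsed."""
    weeks = (date_end - date_start).days / 7
    return (score_end - score_start) / weeks

# Hypothetical CBM scores at the fall, winter, and spring screenings
fall, winter, spring = 42, 58, 71
roi_fall_winter = rate_of_improvement(fall, winter,
                                      date(2023, 9, 15), date(2024, 1, 12))
roi_winter_spring = rate_of_improvement(winter, spring,
                                        date(2024, 1, 12), date(2024, 5, 10))
# Comparing the two ROI values shows whether growth accelerated or slowed
# between the first and second halves of the school year.
```

Comparing a student's ROI across both halves of the year, alongside their benchmark status, gives a fuller picture than either number alone.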
How Can I View My Spring Screening Data?
FastBridge offers several convenient reports for viewing and interpreting spring screening data, including the:
- Group Screening Report
- Individual Benchmark Report
- Group Growth Report
In addition, individual student performance details on spring screenings can be viewed using the Individual Skills Report. Although each report summarizes spring screening data, these reports serve different purposes and therefore help you to answer different questions.
For example, the Group Screening Report allows educators to review the overall performance of an entire class for each screening period.
In other words, what percentage of students met their benchmark learning goals?
In examining spring screening data, the Group Screening Report provides scores from previous screening assessments, allowing for easy, side-by-side comparison. Here is a sample aMath Group Screening Report.
Similarly, the Individual Benchmark Report displays screening scores across multiple screening periods and multiple school years, allowing for easy comparison to benchmark scores as well as evaluation of growth over time.
Finally, the Group Growth Report allows users to identify which students are in need of additional instruction/intervention in order to meet benchmark learning goals.
Although interventions and instruction will likely not be modified, terminated, or initiated as a result of spring screening scores alone, these scores can provide a body of evidence for planning to meet student needs in the upcoming school year.
Here is a sample Group Growth Report.
Did My Students Meet Their End-of-Year (EOY) Goals?
The spring screening data will show whether students met the end-of-year goals. As shown above, the Group Growth Report allows you to view all students’ end-of-year goals and compare them to the spring screening benchmark goals.
Depending on a student’s previous scores (i.e., level of risk), it is possible that the end-of-year goal will vary.
For example, without modifying any of the report settings (the report defaults to the “next highest benchmark setting”), a student in the high-risk category will have an end-of-year goal that is at the some-risk benchmark cut point.
And a student in the some-risk category will have an end-of-year goal that is at the low-risk benchmark cut point. However, these goals might have been modified to fit each student’s needs.
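The default "next highest benchmark" rule described above can be sketched as a small lookup. This is illustrative only: the category names and cut scores below are hypothetical, and actual benchmark cut points vary by assessment, grade, and season:

```python
# Hypothetical spring cut scores; real cut points depend on the assessment.
CUT_POINTS = {"some_risk": 35, "low_risk": 50}

def default_eoy_goal(risk_category):
    """Default end-of-year goal under a 'next highest benchmark' rule:
    a high-risk student targets the some-risk cut point, and
    a some-risk student targets the low-risk cut point."""
    if risk_category == "high_risk":
        return CUT_POINTS["some_risk"]
    if risk_category == "some_risk":
        return CUT_POINTS["low_risk"]
    return None  # low-risk students already meet the benchmark
```

As the text notes, teams may override these defaults with individualized goals that fit each student's needs.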
How Much Growth Did My Students Make This Year?
Once the spring screening period ends, the Impact Report allows you to evaluate student growth throughout the school year. It provides a convenient way to view fall, winter, and spring screening scores side by side.
This color-coded report provides information about all students’ skill progression for all screening periods, while also identifying the proportion of students in a class (or otherwise identified group) at each risk level. Here is an example of the Impact Report.
The Individual Skills Report is the best resource for understanding each student’s performance and skills as the school year ends.
These data can be used to group students for instruction in the following school year. Here is an example of the skills detailed in the Individual Skills Report for CBMreading.
Other Examples of Using Spring Data
What would using spring data look like when using FastBridge reports? Let’s review some examples.
Grade-Level Group Screening Report
Here is an example for fifth-grade aReading scores. This report must be run by a specialist, school manager, or district manager. It provides a summary of student screening scores for each class in a specific grade.
This report can be a first step in reviewing scores and identifying needs. Notice that one of the teachers, Ms. O’Brien, had students whose scores were much higher throughout the year.
The other teachers had students with more variable and lower scores. This finding could lead to a discussion about whether the instruction provided in each class was the same or different.
Class-Level Screening Reports
The fifth-grade teachers could then look at the Group Screening Reports for each class to learn how many students in each class did or did not meet the benchmark goal. This information would help further the conversation about whether changes in core instruction are needed.
In addition, the teachers can talk about how many students demonstrated different risk levels and what types of instruction they need.
In addition to planning for next year’s core instruction, the teachers can identify which students have similar learning needs and would be able to participate in the same intervention groups.
In some cases, the teachers might already know what instruction a student needs. In other cases, they might want to examine other reports to get more details.
Individual Skills Report
This report provides a breakdown of the students’ strengths and weaknesses on the screening assessment. The specific content included in the Individual Skills Report varies according to the assessment.
For example, for CBMreading, this report includes the specific words that a student reads correctly and incorrectly.
Below is an example for a fifth-grade student in the class shown above who took the aReading assessment.
The aReading version of this report breaks down the student's performance into the Common Core State Standards (CCSS) reading skill areas and lists the skills that the student has mastered, is developing, or has yet to develop.
The CCSS English Language Arts skill areas are:
- Foundational Skills
- Reading Literature
- Informational Reading
The above student’s aReading score and skills analysis indicate a need for significant Foundational Skills instruction.
For this student, it would make sense to create a fall schedule that incorporates daily intensive foundational reading instruction as well as opportunities to practice those reading skills in decodable texts matched to the instruction.
Planning for Fall Exceptions
Using spring data is likely to work for most students, but not all, because some students will move over the summer. For this reason, it is important to have a screening procedure for new students.
One method is to have the student complete the screening assessment at the time of enrollment. For example, the student could complete the screening activities while the parent fills out enrollment forms.
Another group of students for whom the winter and spring schedule might not work is kindergartners, because kindergarten is the first year of school and these students will not have spring data from the previous year.
If your district's kindergarten program is not mandatory, some first graders may also arrive without prior screening data, so it might be helpful to gather fall screening data from first graders as well.
In general, the longer that students have been in school, the more data we have available regarding their school progress.
For this reason, using a screening schedule that includes fewer screenings as students move to higher grades could be effective. The following table provides suggested screening frequencies by grade level.
For all grades, except at the beginning of kindergarten, the prior year’s spring data can be used as part of the process of grouping students for the following year. For kindergarten and first grade students, the fall assessment can take place at a screening conducted before school starts, or on the first days of school.
For high school students, screening beyond ninth grade is not needed because students who are on track will generally stay on track and those who need help are generally already identified based on prior assessments and grades.
Still, if a student moves into the high school with limited, or no, prior school records, an initial screening can be done to identify what classes would be most appropriate.
Spring screening data provide important information for understanding all students’ growth over the school year as well as whether additional support is needed to help students in the following school year.
FastBridge has a number of reports that can be reviewed in order to understand and make plans for each student’s future learning needs.
Eklund, K., Kilgus, S., von der Embse, N., Beardmore, M., & Tanner, N. (2017). Use of universal screening scores to predict distal academic and behavioral outcomes: A multilevel approach. Psychological Assessment, 29, 486-499. doi:10.1037/pas0000355
Glover, T. A., & Albers, C. A. (2007). Considerations for evaluating universal screening assessments. Journal of School Psychology, 45, 117-135. doi:10.1016/j.jsp.2006.05.005
Kettler, R. J., & Albers, C. A. (2013). Predictive validity of curriculum-based measurement and teacher ratings of academic achievement. Journal of School Psychology, 51, 499. doi:10.1016/j.jsp.2013.02.004
Kettler, R. J., & Elliott, S. N. (2010). A brief broadband system for screening children at risk for academic difficulties and poor achievement test performance: Validity evidence and applications to practice. Journal of Applied School Psychology, 26, 282-307. doi:10.1080/15377903.2010.518584
Klingbeil, D. A., Nelson, P. M., Van Norman, E. R., & Birr, C. (2017). Diagnostic accuracy of multivariate universal screening procedures for reading in upper elementary grades. Remedial and Special Education, 38, 308-320. doi:10.1177/0741932517697446
Salinger, R. L. (2016). Selecting universal screening measures to identify students at risk academically. Intervention in School and Clinic, 52, 77-84. doi:10.1177/1053451216636027
Stevenson, N. A. (2017). Comparing curriculum-based measures and extant datasets for universal screening in middle school reading. Assessment for Effective Intervention, 42, 195-208. doi:10.1177/1534508417690399
VanDerHeyden, A. M. (2013). Universal screening may not be for everyone: Using a threshold model as a smarter way to determine risk. School Psychology Review, 42, 402.
Van Norman, E. R., Nelson, P. M., & Klingbeil, D. A. (2017). Single measure and gated screening approaches for identifying students at-risk for academic problems: Implications for sensitivity and specificity. School Psychology Quarterly, 32, 405-413. doi:10.1037/spq0000177