Written by Jaime Harris on January 24, 2019
Schools and districts have increasingly turned to data visualization tools to better (and more quickly) understand their student data. At the student level, visualizing data can inform MTSS processes, intervention selection, at-risk identification, and more. At the class, grade, school, or district level, data visualization can help us “find patterns.”
How many times have we heard that (or even said it ourselves)? We know that this is important to do. But what are these “patterns” we’re supposed to be looking for? How do we look for them? How do we know when we’ve found them?
What are some examples of patterns we might find in our student data?
When we say “pattern,” we are looking for recurring events in our data. Here are some examples of things you might look for or see in your different data sources:
- Achievement data: Groups of students consistently outperforming other groups. Subjects, domains, and standards/skills with significantly higher (or lower) performance.
- Attendance: Days, weeks, months, and groups of students with higher or lower attendance.
- Behavior: Locations, times, and days of the week with a higher rate of behavior incidents. Types and severity of incidents that are higher.
When looking for patterns, we want to look at data over time to ensure those events are indeed recurring. Anomalies can happen in your data, and we don’t want to make decisions based on those.
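As a minimal sketch of what “looking for recurring events” can mean in practice, the snippet below tallies hypothetical behavior-incident records by location and day of week. The records and field names are invented for illustration; real exports would come from your student information system.

```python
from collections import Counter

# Hypothetical behavior-incident records (illustrative only).
incidents = [
    {"location": "cafeteria", "weekday": "Mon"},
    {"location": "cafeteria", "weekday": "Fri"},
    {"location": "hallway",   "weekday": "Fri"},
    {"location": "cafeteria", "weekday": "Fri"},
]

# Tally incidents by location and by day of week to surface concentrations.
by_location = Counter(r["location"] for r in incidents)
by_weekday = Counter(r["weekday"] for r in incidents)

print(by_location.most_common(1))  # [('cafeteria', 3)]
print(by_weekday.most_common(1))   # [('Fri', 3)]
```

Running the same tally over several months (rather than one export) is what separates a genuine pattern from a one-off anomaly.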
How might we get started in searching for these patterns? What are we looking for?
If you or your team are newer to data visualization and analysis, sometimes applying simple filters or groups to your data can be an effective and meaningful place to start hunting for patterns. Here are four types of filters to think about using as you get started.
1) The Obvious: Demographics, School, and Grades
For many of us, the idea of filtering data by demographics (gender, race, ethnicity, socio-economic status, disability, etc.) is not new. Reporting data by demographics and subgroups is part of improvement plans, accountability reporting, and is now expressly required by ESSA.
- Given shifting national and world events, many districts are monitoring student populations such as immigrant and migrant status. Homelessness and primary nighttime residence are also commonly monitored. Teams should also consider monitoring indicators related to trauma and mental health needs.
- Over time, student populations can shift dramatically—and you may see those shifts impact your data. If your team is monitoring a robust set of demographics longitudinally, it can inform programming and intervention decisions, student services offerings, language services, and staffing needs.
Filtering by school can reveal insights that help share best practices and ensure equitable distribution of resources. Here are some examples:
- If all elementary schools are struggling on a particular standard except for one particular school, your team can investigate whether that building is doing something differently. What instructional approaches are they taking? What do their formative assessments look like? Did they pace instruction differently or do additional reteaching? Or, you could look for pockets of success to find the “why” in those successes.
- You might also see that behavior incidents are higher at a particular school. Perhaps many of those incidents are happening in a specific location due to the building layout, suggesting that more staff are needed in that area.
- For students receiving tier 2 and tier 3 interventions, are specialized staff equitably and responsively distributed across the buildings, given the number of students receiving those interventions by building?
Filtering by grade can be especially powerful across multiple years.
- When looking at the same data for the same assessment, do results look pretty similar from year to year? Or, are we continuing to increase the number of students at/above proficiency?
- Is there a specific grade threshold at which achievement drops? Does it happen over multiple years? If so, it’s possible that there is a pacing issue or curriculum-instruction-assessment misalignment.
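To make the idea of “applying simple filters” concrete, here is a minimal sketch using invented student records. The school names, the `ell` flag, and the scores are all hypothetical; the point is simply that filtering a flat list by school, grade, or a demographic field and comparing group averages is a few lines of work.

```python
# Hypothetical student achievement records; field names are illustrative.
students = [
    {"id": 1, "school": "Lincoln ES", "grade": 4, "ell": True,  "score": 62},
    {"id": 2, "school": "Lincoln ES", "grade": 4, "ell": False, "score": 78},
    {"id": 3, "school": "Adams ES",   "grade": 4, "ell": True,  "score": 71},
    {"id": 4, "school": "Adams ES",   "grade": 5, "ell": False, "score": 85},
]

def mean_score(records):
    """Average score for a filtered group of student records."""
    return sum(r["score"] for r in records) / len(records)

# Filter by school, then by a demographic flag, and compare group means.
lincoln = [r for r in students if r["school"] == "Lincoln ES"]
ell = [r for r in students if r["ell"]]
print(mean_score(lincoln))  # 70.0
print(mean_score(ell))      # 66.5
```

In a real analysis you would run the same filters against multiple years of data to see whether any gap between groups persists over time.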
2) Layering Student Groups Based on Assessment or Other Data
Grouping students based on Data Source A (whether an assessment, attendance, behavior, social emotional, intervention, etc.) and viewing that same group of students’ data from Data Source B can expose patterns related to instructional rigor and program effectiveness.
- If you create a group of students who were below proficient on a state standards assessment and apply that group to your interim assessment or classroom data, what do you see? If students were passing their interims but not the summative assessment, there may be a rigor or learning target misalignment. Or, it may reveal a lack of stamina and the need for more practice to increase automaticity of skills.
- If you create a group of students with a high number of absences and apply it to their social emotional data, what do you see? Are there students who may need support in feeling engaged or in managing relationships? Are there particular classes those students never miss? If so, that may indicate a close teacher relationship or an interesting subject that can open the door to higher engagement.
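The “layering” described above amounts to intersecting student groups from two data sources. A minimal sketch, using invented student IDs: students who passed their interims yet fell below proficient on the summative are the set intersection of the two groups.

```python
# Hypothetical student IDs from two data sources (illustrative only).
below_proficient_state = {101, 102, 103, 104}   # state summative assessment
passing_interims = {102, 104, 105}              # interim/benchmark assessment

# Students passing interims yet below proficient on the summative may signal
# a rigor or learning-target misalignment between the two assessments.
possible_misalignment = below_proficient_state & passing_interims
print(sorted(possible_misalignment))  # [102, 104]
```

The same intersection works for any pair of sources, such as high-absence students layered against social emotional screener data.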
3) Students on Interventions
Districts invest major staff resources and dollars into their intervention programming, training, and implementation. Sometimes, it can be hard to evaluate the ROI on those types of programs—or, even the ROI on something like summer school.
- Perhaps a new intervention program was implemented at the beginning of the year. Create a group of those students and compare their EOY achievement results to a group of all students not receiving that intervention. What do you see? How did the intervention group perform compared to students not receiving the intervention? Did their rate of growth increase? Do we see evidence that the intervention has a meaningful impact on student learning or not?
- You can also compare Intervention A’s student group to Intervention B’s students. Did one intervention do more to increase student outcomes? Are there any interventions that didn’t make much of a difference? Knowing which programs are not currently impacting student learning can lead to healthy questions about implementation fidelity and whether those programs are a good match for student needs.
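A minimal sketch of the comparison above, with invented growth scores. Note the comment in the code: this is a descriptive comparison, not a causal evaluation, since intervention and non-intervention groups usually differ in prior achievement and other factors.

```python
from statistics import mean

# Hypothetical EOY growth scores; real data would come from your assessment system.
intervention_a = [4.2, 5.1, 3.8, 4.9]
no_intervention = [3.1, 3.4, 2.9, 3.6]

# A simple descriptive comparison -- not a causal claim; the groups may differ
# in ways (prior achievement, attendance) that require more careful matching.
diff = mean(intervention_a) - mean(no_intervention)
print(round(diff, 2))  # 1.25
```

Swapping in a second intervention’s scores for `no_intervention` gives the Intervention A vs. Intervention B comparison described above.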
4) By Class or by Teacher
As a teacher (or as an instructional coach or principal), looking at class data at the end of the year can yield powerful insights into instruction. What really worked? Where did students get stuck? Although these conversations are typically happening throughout the year—both in teacher self-reflection and as a group in PLCs—seeing data from the year as a whole brings a different perspective.
- What standards did students do really well on, and where did they struggle the most? Was it consistent throughout the year and across assessments?
- Were there a high number of a particular type of behavior incident? If so, are there a few classroom management strategies that teacher could try out next year?
- Beyond in-the-moment teaching adjustments, did the teacher make shifts during the year, such as introducing a new method of engaging with parents or providing supplementary content after tests? If so, did those seem to support students?
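The standards question in the first bullet above can be sketched as a simple percent-correct rollup. The item-level results and standard codes here are invented; real data would come from your assessment platform.

```python
from collections import defaultdict

# Hypothetical item-level results for one class; standard codes are illustrative.
results = [
    {"standard": "RL.4.1", "correct": True},
    {"standard": "RL.4.1", "correct": True},
    {"standard": "RL.4.2", "correct": False},
    {"standard": "RL.4.2", "correct": True},
    {"standard": "RL.4.3", "correct": False},
    {"standard": "RL.4.3", "correct": False},
]

# Group responses by standard, then compute percent correct per standard.
by_standard = defaultdict(list)
for r in results:
    by_standard[r["standard"]].append(r["correct"])

pct_correct = {s: sum(v) / len(v) for s, v in by_standard.items()}

# Sorting ascending puts the standards where the class got stuck first.
print(sorted(pct_correct.items(), key=lambda kv: kv[1]))
```

Repeating the rollup per assessment across the year shows whether a struggle on a standard was consistent or a one-time dip.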
Filters are a great way to get started looking at data, but they are only one approach among a great many. It’s worth reiterating that it’s important to make sure these “patterns” are happening over time, not in a single event in your data. And keep in mind that patterns should often be checked against multiple data sources, depending on the question at hand.
Illuminate Education is a provider of educational technology and services offering innovative data, assessment and student information solutions. Serving K-12 schools, our cloud-based software and services currently assist more than 2,000 school districts in promoting student achievement and success.
Ready to discover your one-stop shop for your district’s educational needs? Let’s talk.