
Top 3 Myths About Educational Assessment, Busted!

January 31st, 2019

Educational assessment myths can keep educators from their best teaching. One reason myths continue circulating despite being “busted” is that, intuitively and even practically, they can seem so right, and they often contain fragments of truth.

In this blog post, I’ll address the three most commonly held myths about educational assessment, review the research that “busts” these myths so we know better, and outline the next steps we can take so we can begin doing better.

Myth #1: Standardized assessments of any kind don’t help students learn.

Why this seems true:

As a result of testing requirements in the No Child Left Behind Act, phrases like “teaching to the test” and schools being “data-rich, but information-poor” (i.e., D.R.I.P.) have come to describe education systems that churn out test scores for purposes of compliance with federal, state, or district mandates rather than informing instruction.

As many as 60% of teachers report that their students spend a week to 10 days taking district or state-mandated tests. In addition, there is usually some time set aside for test preparation, which can add as many as 12-14 school days (Rentner, Kober, Frizzell, & Ferguson, 2016).

Despite this, professional development to improve educators’ skills in using data to inform instruction is minimal, both in hours and in relevance to the skill needs of students in classrooms (e.g., Moore & Shaw, 2017; Mandinach, Rivas, Light, Heinze, & Honey, 2006). It’s as if student learning is being sacrificed at the altar of mandated tests with no clear purpose or real quality behind the assessments, only accountability reports and lost instructional time.

The mythbuster:

The kernel of truth is that if a standardized assessment lacks purpose and quality, takes days or weeks to administer, and has no connection to decisions teachers will make, it is a waste of time. However, evidence from research about effective teaching is clear that educators need quality formative and summative assessments in order to identify what their students know and what they still need to learn (Salvia, Ysseldyke, & Witmer, 2017).

In fact, most teachers do not want to completely eliminate educational assessment (Moore & Shaw, 2017). Instead, they want to reduce the amount of testing time, and increase meaningful, timely assessment to evaluate whether their teaching is leading to student learning. Standardized tests do help students learn if they are meaningful, valid, and the results are relevant and immediately actionable for teachers in the classroom.

What to do now:

  1. Use assessments that have strong reliability and validity for their intended purpose and that provide immediate results to teachers. For example, extensive research has verified that FAST™ assessment results can be quickly linked to the type and kind of instruction needed.
  2. Provide high-quality professional development and on-going coaching support for teachers to learn how to administer and score the assessments, as well as interpret and apply the data directly to their curriculum and content standards.
  3. Explain why the specific assessment is needed (i.e., “So I know how to teach you better”) and how the results will be used to improve learning for students.

Myth #2: Computer Adaptive Tests (CATs) and Curriculum-Based Measures (CBMs) are unfair because they assess students on content they haven’t yet learned.

Why this seems true:  

CAT assessments, by design, are used for screening to identify each student’s current instructional level and learning needs. The CAT administers items to find the score that is the “best fit” for a student’s general knowledge of that content area. This means that students could be administered items at their grade level, above their grade level, and even below their grade level, depending on each individual item response.  For example, if a 4th-grade student starts with 4th-grade questions but gets the first ones correct, the CAT will present increasingly harder items to determine what above-grade-level skills the student has. This means the student will likely get items with content that has not yet been taught or learned.

CBMs are brief, timed assessments that show a student’s relative progress toward specific learning goals in basic reading or math skills (Deno, 1985). They are used for screening and progress monitoring. CBM assessments are typically administered at grade-level difficulty, which means they include content from the skills students are expected to know by the end of their current school year. For example, a fourth-grade student will usually be given CBM assessments at the level of spring of fourth grade (screening) or slightly easier (progress monitoring), even if the student is being assessed in September of fourth grade.

The mythbuster:

The bit of truth in this myth is that students will be assessed on content they may not know; however, an understanding of the purpose of a CAT or CBM assessment will shed light on why this actually helps teachers inform instruction.

First, CATs are built on Item Response Theory (IRT), a statistical framework that places item difficulty and student ability on the same scale (Lord, 1980). One thing IRT allows is for the assessment to find the sweet spot of what a student does and does not know, so the teacher knows what to teach next. With a CAT, a student will first be asked about content he or she has previously learned to document current skills.

However, in order to find out what a student doesn’t know (and needs to learn), the student must be asked at least a few test items that are too hard to answer; that is how we learn what they don’t know. It serves a good psychometric purpose. Importantly, if a student answers harder items incorrectly, the CAT will adjust item difficulty and present easier items.
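To make the adaptive logic concrete, here is a minimal sketch in Python. It is not the actual FAST algorithm (real CATs select items using IRT-based ability estimates); it simply illustrates, with hypothetical names like run_cat and answer_fn, how raising difficulty after correct answers and lowering it after incorrect ones converges on the “sweet spot” between what a student knows and doesn’t know.

```python
# Illustrative only: a simple "staircase" stand-in for CAT item selection.
# Real CATs (including FAST) use IRT-based ability estimates; all names
# here (run_cat, item_bank, answer_fn) are hypothetical.

def run_cat(item_bank, answer_fn, start_difficulty=4.0, step=1.0, n_items=10):
    """Administer n_items, raising difficulty after each correct answer
    and lowering it after each incorrect one."""
    difficulty = start_difficulty          # e.g., begin at grade-4 content
    responses = []
    for _ in range(n_items):
        # Pick the available item closest to the current difficulty target.
        item = min(item_bank, key=lambda i: abs(i["difficulty"] - difficulty))
        correct = answer_fn(item)          # True/False: the student's response
        responses.append((item["difficulty"], correct))
        difficulty += step if correct else -step
        step = max(step * 0.7, 0.25)       # take smaller steps as we converge
    return responses  # the final items bracket the student's "sweet spot"
```

Run against any item bank, the difficulty climbs while the student keeps answering correctly and backs off after misses, which is exactly the behavior described above: some items will necessarily cover content not yet taught.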

The purpose of CBM is to assess general reading (or math) ability and determine whether students are on track to achieve end-of-year, grade-level outcomes. Administering a CBM means quickly assessing where a student is relative to where he or she is expected to be at the end of the year. The purpose is to figure out how close or far the student is from that target, and then provide instruction that keeps the rate of learning on pace so the student ends the year at grade level.

Here is an analogy: let’s say you join a team that plans to run a 5K race. The goal, or expected target, is for all team members to finish in 25 minutes or less. You might start by figuring out your current pace and then set your goal pace at 5 minutes per kilometer (about 8 minutes per mile). Then you track your progress weekly based on how quickly you are closing the gap between your current pace and the goal pace. This is akin to what we do with CBM assessment: we document week to week how well a child is progressing toward end-of-year learning targets.
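In code, the CBM version of this pace arithmetic is nothing more than a gap divided by the time left to close it. The numbers below are illustrative only, not benchmarks from any published norms table:

```python
# Illustrative CBM "pace" arithmetic; the scores and targets are made up,
# not values from any published norms table.

fall_score = 70       # e.g., words read correctly per minute in September
spring_target = 118   # expected end-of-year (spring) benchmark
weeks_remaining = 30  # instructional weeks between fall screening and spring

# Weekly growth needed to reach the target by year's end:
needed_growth = (spring_target - fall_score) / weeks_remaining
print(f"Needed growth: {needed_growth:.1f} words/week")      # 1.6 words/week

# At a later check-in, compare observed growth to the needed rate:
weeks_elapsed, current_score = 10, 82
observed_growth = (current_score - fall_score) / weeks_elapsed
print(f"Observed growth: {observed_growth:.1f} words/week")  # 1.2 words/week
print("On track?", observed_growth >= needed_growth)         # False
```

If the observed weekly growth falls below the needed rate, the student is not closing the gap fast enough, and instruction is adjusted, just as a runner would adjust training after a slow weekly split.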

What to do now:

For this myth, knowledge is power for teachers and students. Understanding the purpose of these assessments and the “why” behind their design gives everyone a common language for how they are instructionally useful, meaningful, and fair.

A key to shared understanding is simply sharing information. For example, tell students what to expect during a CAT: they may get items that seem easy, just right, or too hard, and all are OK. These questions help teachers know how to teach them best; we have to learn what students know AND what they don’t know. In nearly all cases, students take items that seem “too hard” in stride.

Bonus step: share the results with students (and families) soon after the assessment, and explain how they will be used for instruction.

Myth #3: CBMreading encourages kids to be speed readers, not good readers.

Why this seems true:

CBMreading involves having students read out loud for 1 minute while the teacher records any reading errors. It is logical that students (and teachers) will interpret this assessment as one that emphasizes having students read as fast as they can, so they get the highest score possible.

It might seem like this is relatively pointless information because just assessing how fast a child reads doesn’t tell you anything about other important reading skills, like comprehension, which is the whole purpose of reading in the first place.

The mythbuster:

The National Reading Panel (2000) stated that “fluent readers are able to read orally with speed, accuracy, and expression.”  All three of these features are essential for effective reading and are measured through a test of oral reading fluency.  

To understand this, think of someone who is fluent in a foreign language. Such fluency does not mean speaking the language as quickly as possible. Instead, fluency means speaking accurately, with expression, and at a pace that is automatic and communicates effectively. Assessments of oral reading fluency, like FAST CBMreading, are standardized sets of passages and administration procedures in which the number of words read correctly per minute is the oral reading fluency score.

The standardized administration procedures never ask the student to read quickly or “as fast as you can.” Instead, the instructions ask the child to do “your best reading,” which includes attention to accuracy, expression, and rate. In fact, the administration instructions state that if a student does start speed reading, the administrator is to stop the timer and say, “This is not a speed reading activity. Do your best reading.” And the examiner records not only the student’s rate, but also accuracy and phrasing.
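The score itself is equally simple. As a sketch, words correct per minute (WCPM) is just the words attempted minus the errors, scaled to a one-minute rate; the function name and numbers here are illustrative, not FAST’s actual code:

```python
# Words correct per minute (WCPM): illustrative helper, not FAST's code.

def wcpm(words_attempted: int, errors: int, seconds: float = 60) -> float:
    """Words read correctly, scaled to a per-minute rate."""
    return (words_attempted - errors) / (seconds / 60)

# A student who attempts 104 words in the 1-minute sample with 4 errors:
print(wcpm(104, 4))   # 100.0
```

Note that errors pull the score down directly, which is why racing through a passage inaccurately does not produce a high score: accuracy is built into the metric.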

What to do now:

The myth about oral reading fluency assessment as a speed reading test drives home three critical points for educator professional development.

  1. First, educators who teach reading need to be fully informed about the importance of reading fluency as a critical skill for reading comprehension. “If the text is read in a laborious and inefficient manner, it will be difficult for the child to remember what has been read and to relate the ideas expressed in the text to his or her background knowledge” (National Reading Panel, 2000).  
  2. Second, ensure educators are well trained to deliver and interpret CBM assessments properly. They should not set students up to think they should speed-read the passage, even though it is timed. Prepping students on what “best reading” is – accurate, with expression, at a quick pace where each word is clearly heard – also helps educators get the best data about this important reading skill.
  3. Third, educators need to know that decades of empirical research have demonstrated that oral reading fluency is an indicator of reading comprehension, in that it is highly correlated with direct measures of reading comprehension (e.g., Wayman, Wallace, Wiley, Ticha, & Espin, 2007). Reading comprehension is much more complex than “just oral reading fluency,” and there are no claims that CBM is a direct measure of reading comprehension. But the research is clear: CBM is a reliable and valid indicator of overall general reading skill.


References

Deno, S. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219-232.

Eunice Kennedy Shriver National Institute of Child Health and Human Development, NIH, DHHS. (2000). Report of the National Reading Panel: Teaching children to read: Reports of the subgroups (NIH Publication No. 00-4754). Washington, DC: U.S. Government Printing Office.

Lord, F. M. (1980). Applications of item response theory to practical testing problems. Hillsdale, NJ: Lawrence Erlbaum Associates.

Mandinach, E. B., Rivas, L., Light, D., Heinze, C., & Honey, M. (2006). The impact of data-driven decision making tools on educational practice: A systems analysis of six school districts. Paper presented at the annual meeting of the American Educational Research Association, San Francisco, CA, April 9, 2006.

Wayman, M. M., Wallace, T., Wiley, H. I., Ticha, R., & Espin, C. (2007). Literature synthesis on curriculum-based measurement in reading. The Journal of Special Education, 41, 85-120.

Moore, R., & Shaw. (2017). Teachers’ use of data: An executive summary. Iowa City, IA: ACT, Inc.

Rentner, D. S., Kober, N., Frizzell, M., & Ferguson, M. (2016). Listen to us: Teacher views and voices. Washington, DC: Center on Education Policy.

Salvia, J., Ysseldyke, J., & Witmer, S. (2017). Assessment in special education (13th ed.). Boston, MA: Cengage Learning.

