By: Dr. Scott P. Ardoin, University of Georgia
Over the last decade, great emphasis and much debate have surrounded the administration of state mandated (SM) tests in public schools. In general the tests are not well liked, and teachers, parents, and administrators often wish they were not part of the current school culture. Test scores can, however, affect the reputation of a school, its administration, and its teachers. As a result, much attention is given to these tests before, during, and after test administration. Schools generally send letters home asking parents to make sure that their children get a good night of sleep the night before SM testing and to provide their children with a good breakfast. Students are constantly reminded of the importance of the SM tests, with teachers occasionally putting too much pressure on students. I have unfortunately witnessed students being told of potential negative ramifications (failure, teachers being fired) should they fail to perform well on such a test. Schools also go to great efforts to ensure that the testing environment is conducive to test taking by making sure that all lights in the rooms are working, hallways are quiet, and tests are administered at times when students are likely to be most alert. Although raising students' anxiety to heightened levels is not a good idea, taking steps to ensure that students give their best effort on the SM test, that the testing environment is conducive to test taking, and that students are alert and attentive during the test are all excellent ideas. Taking these steps increases the chances that students will do their best and that test outcomes will reflect students' knowledge.
Unfortunately, schools rarely place the same level of importance on the administration of another type of assessment: universal screening (US) measures (e.g., CBM, aReading). Although screening outcomes are unlikely to hurt a school's reputation, screening results can be used to help schools improve student outcomes across many types of assessments. Because US measures are administered multiple times across the year, as opposed to only annually, the data can be used for (a) evaluating the effects of the curriculum within grade levels within and across schools; (b) examining whether certain groups of students (e.g., English Language Learners) are making sufficient progress; (c) determining which interventions are resulting in the greatest student gains; and (d) identifying which individual students are in need of supplemental intervention.
Encouraging students to do their best and ensuring the testing environment is ideal for test taking may actually be more important for screening, considering that the data are not simply employed as a snapshot of how students compare to a standard or to a normative sample. Because screening data are used to examine growth, and thus a student's performance in January is compared to his or her performance at the beginning of the year, it is important that all variables, including the student's motivation and the directions given by teachers, remain consistent. Failure to follow standardized procedures might result in data suggesting that students have actually regressed in their skills, when in reality the decrease in performance was due to how and when the test was given. Examples of such factors include (a) students lacking motivation to do well, (b) students having difficulty attending to the tests due to a less than conducive testing environment, and/or (c) students employing different strategies when taking the tests because different directions were given beforehand. I once met with a set of school administrators who were concerned about decreases in performance across students who were being progress monitored. After a bit of detective work, I discovered that they had recently changed their test administration procedures. At the beginning of the year they had informed students of their previous scores and offered them incentives for beating those scores; after some debate regarding the merit of this procedure, they had stopped the practice. Although the reason for the change in scores in this case may seem obvious, subtler changes, such as different test administrators who either give different directions than those specified by the test or fail to score student performance accurately, can make large differences in student data.
Given the importance of screening data to the individual student and collective group of students, it is essential that schools take screening assessments seriously.
Listed below are several steps that should be taken to increase the probability that students’ screening scores will accurately reflect their achievement level and their progress across the academic year.
- Encourage students to do their best and let them know that their performance on each test will help teachers provide instruction that best meets their instructional needs. Following test administration, teachers might reflect back on the tests and let students know how the data are being used to make instructional decisions. Celebrate group success by, for example, announcing on the intercom which classroom made the greatest gains since the prior US administration.
- Let parents know that US testing will be occurring and encourage them to help their children to get adequate sleep the night before and provide their children with a nutritious breakfast. When test scores are sent to parents, thank them for their assistance in helping to prepare their children for the tests. Let parents know how the data are being used for making instructional decisions at the individual and group levels.
- Administrators should emphasize to their staff the importance of following strict protocol during test administration and scoring. Having teachers audio record sessions and having a second person listen to a sample of the recordings to check for accuracy and consistency in testing protocol and scoring can be of great value. Not only does this allow performance feedback to be given to test administrators regarding what they are doing correctly and incorrectly, but simply knowing that one's recording may be reviewed encourages adherence to protocol and scoring rules.
- Consistency is key. Ideally, the same person who administers a US test in the fall should administer the US test in the winter and spring. Likewise, it is best to administer the test in the same location and at approximately the same time of day.
- Be prepared. Test administrators should be ready to administer a test the moment the student arrives. If tests are being administered via computers, each computer should be ready for the student to whom it is assigned. Making students wait for their computer to be set up is likely to result in excess noise, whereas a computer lab set up and ready for test administration helps to make it clear to students that the test is important.
- Make sure that there is minimal noise and good lighting in the test setting. Conducting assessments in locations where students are likely to be disturbed by noise or movement can negatively impact student performance; this can be especially problematic when administering timed tests (e.g., Curriculum-Based Measures). Remember, students take cues from the choices that the adults in the environment are making. Thus, if students notice that tests are being administered in noisy locations and/or that teachers and staff are not themselves being quiet near test settings, they are unlikely to take the testing seriously.
- Monitor test administration. Teachers should remain in the testing room during testing, walking around the classroom and monitoring students. Again, this demonstrates to students the importance of the test. Should teachers leave to take a break during test administration, students may take this as a sign that the test is not important.
Dr. Ardoin’s research interests include the application of principles of applied behavior analysis within classroom settings. He applies these principles not only to developing classroom and individual student behavioral interventions, but also to developing academic skill interventions and assessment materials. Much of his current research employs eye-tracking procedures in order to observe the reading behaviors engaged in by students when reading and how those behaviors are altered as a function of intervention. In addition to sharing this knowledge base with graduate students in school psychology through classwork and collaborative research, Dr. Ardoin teaches courses that make up the course sequence offered by UGA and approved by the Behavior Analyst Certification Board towards BACB eligibility requirements.