Riverside Insights Blog

Selective Testing: Understand Student Performance More Efficiently

Written by Amy Gabel, PhD | Sep 15, 2023 2:13:20 PM

So many tests, so little time...

There’s always a sense of excitement for new beginnings at the start of the school year. But then, the tension creeps back for many of us as we think about a raft of referrals soon to come. Doesn’t it seem like every school year gets more hectic? I’ve always enjoyed conducting evaluations, but let’s face it, there are also so many other things for school psychologists to do to support students, parents, and teachers.

 

Efficiency is Key for us All!

When it’s time to conduct formal, individualized evaluations, we must remember that our tests were designed as standardized tools that not only help our teams determine eligibility, but also answer key “WH” questions such as:

    1. “Why” (why has the student not responded to intervention during MTSS? Why is the student still having trouble in reading?)
    2. “What” (that is, what comes next?)

Test scores deliver only part of the picture. Some of the best data we can collect during any standardized assessment concern the conditions under which students perform best versus those where they have trouble. Knowing more about the resources and abilities a student draws upon to solve problems provides priceless information when communicating results and making intervention suggestions. To put this within more of a cognitive processing, or even a “whole child,” perspective:

“Is having knowledge sufficient? Might it be important to know how a student uses their knowledge, to what extent, and in which situations?”

 

Targeted Assessment Answers Specific Questions

There is an art and a science to assessment. This is true in all areas, including medicine and psycho-educational assessment. Unfortunately, in some of the discussions regarding models of eligibility that have ensued in the field of school psychology, we have heard that ability doesn’t matter, that using PSW sends us down a neuropsychology “rabbit hole,” and so on. While it’s true that some students don’t receive the curriculum or intervention that they need, this does not negate the fact that the brain and the process of learning are complex and dynamic. When initial MTSS interventions haven’t yielded success, we gain insight into what doesn’t work, but we still haven’t gotten to the “why?”

 

Selecting tests that are related to the known task demands of various learning activities helps us develop a roadmap for our investigations. Although many of the students we work with enjoy the one-to-one attention they receive during an evaluation, they don’t necessarily want to spend session after session taking tests and miss lots of class time, so we already know that we don’t want to give an exhaustive number of tests, as in some neuropsychological evaluations. Rather, based on task demands and specific student-centered referral questions developed prior to testing, the selective testing model provides the efficiency we need without sacrificing accuracy in understanding performance.

 

Putting the Idea into Action

In a previous blog, Dr. Sarah Holman described how the Woodcock-Johnson® IV (WJ IV) is designed to support selective testing processes. As evaluators, how do we operationalize that? When I begin an assessment, I usually start by gathering the following information:

      • What has already been done?
      • What unique social, language, and cultural issues should I consider? Or, what is the context within which the student is learning and developing?
      • Why was the referral for testing made? What are the areas of suspected disability?
      • What does the team hope to learn from the data? (Besides just whether the student is eligible!)

Until I have a solid grasp of these factors, I don’t have the essential foundation to develop a student- and consumer-focused assessment battery. Wait, aren’t these the same? In some ways, yes, but in others, no. When completing evaluations in the schools, child advocacy is obviously key. However, evaluators should always consider the family and teachers as primary consumers of the assessment data and reports. Thinking of these consumers is critical, as they are on the “front lines” regardless of whether a student is eligible for specialized services; these key adults are the individuals who will be delivering the “what’s next” following an evaluation.

 

It can seem overwhelming when there are upwards of a dozen subtests on the various measures included in the WJ IV suite. However, it doesn’t have to be! Using our knowledge of the research on learning and on the development of reading, writing, math, and speaking skills, we can use specific components of these large test batteries to address specific questions. As mentioned previously, how we develop, phrase, and answer these questions in our reports can focus on “simple” eligibility determination, or it can relate more directly to understanding how a student solves problems. For example, when a student is referred because of math difficulties, we can develop our battery and focus our report in different ways, such as:

      1. Does X have the characteristics of a student with a specific learning disability in math?
      2. Why does X experience difficulty generalizing knowledge of specific math facts to word problems?
      3. Does X have trouble with tasks that require considerable sustained or focused attention?
      4. How do demands for speedy performance impact X’s performance in math?

The more we know about the task demands associated with learning and the characteristics associated with various learning disabilities, the closer we come to an intervention-focused report. For example, Geary et al. (2011) identified three brain-based math reasoning disability subtypes: 1) procedural, 2) semantic, and 3) visuospatial. When we consider the complex task demands associated with math reasoning, multiple networks within the cognitive system can impact performance. Conceptualizing an applied math problem can involve language, retrieval, speed, and the ability to distinguish relevant from irrelevant information. NCTM and other organizations offer resources that delineate task demands in mathematics. We need to balance our understanding of what tests measure with how those measurements relate to task demands as we develop our batteries.

 

Thinking back to our referral questions: while the first question can lead us to a satisfactory report, the second through fourth questions may help us frame interventions better. All can guide our test selection on the WJ IV.

In our math example, potential tests from the WJ IV to start with might include measures of:

 

 

Keep in mind that not all of the above tests may be needed, given what you already know from MTSS. Pending the results of an initial inquiry, follow-up could be needed with Analysis-Synthesis (if math facts are low and Gf/GIA is low, for example). Other measures might be necessary to investigate social, behavioral, and attention/executive functioning as appropriate for the student. Read more on the Core-Selective Evaluation Process Applied to Identification of a Specific Learning Disability in WJ IV Service Bulletin #8.

 

If we consider other diagnostic endeavors, this process is akin to what evaluators in all fields do. For example, when your veterinarian needs to evaluate what is happening with an aging animal, they combine your observations with general bloodwork and then more comprehensive testing for specific causes as hypotheses are ruled in or out. Applied to psycho-educational evaluation, using test data, including scores and observations from multiple forms of data collection, helps us understand the process of learning so that we may craft instructional strategies. Keeping in mind that an intervention-based report focuses not only on areas of need but also on areas of strength upon which to build, we can produce useful reports that describe the student, not the test(s).

 

I’ve always felt that there is no better compliment than when a parent says, “You really get my kid!” Or when teachers who have been providing supports thank you for confirming that they are on the “right track” with preliminary interventions and now have even more strategies to add to their repertoire.