
Go Beyond The Standard Score: Validating Your Data when Assessing ELs

Written By Dr. Pedro Olvera
On Jul 15, 2022
9 minute read

Considering context is essential to a valid EL evaluation.

The assessment of English learners (ELs) has challenged psychologists and educators for almost a century (Sanchez, 1934).  Differentiating between a learning difference and a disorder requires that evaluators collect, synthesize, and analyze multiple sources of information while trying to understand the roots of the individual's learning difficulties.  However, because of complex federal, state, and district mandates, or simply a lack of training in the assessment of culturally and linguistically diverse (CLD) learners, there is a temptation to interpret assessment results superficially at the standard score level (i.e., the tip of the iceberg) without going deeper into the many probable causes for the lack of academic progress.  Looking at the student holistically and within the context of cultural and linguistic variables, educational experience, and intervention history will provide the examiner with a more comprehensive outlook on the child.  Several psychoeducational assessment frameworks have been proposed highlighting the importance of considering language proficiency, using culturally appropriate tools, and integrating multiple data sources (see Olvera & Gomez-Cerrillo, 2011; Ortiz, 2019; Sanchez et al., 2013).  These frameworks have proven helpful and insightful for assessing ELs. 

 

However, to make nondiscriminatory eligibility determinations, it is essential that examiners not only gather multiple sources of information but also use that data to validate standardized cognitive assessment scores.  Assessment validation in this context refers to the process of ensuring the accuracy and quality of the obtained data for educational purposes, including decision-making.  For example, failing to consider English language proficiency and potential acculturation variables during interpretation can obscure the true potential of ELs, leading well-meaning educational teams to misguided interpretations of the data.  To provide a framework for validating the cognitive assessment results of ELs, this writer proposes that examiners thoroughly investigate and analyze the following: 



Understanding how to apply each step will help the examiner validate the assessment and approach eligibility determinations with confidence.  A brief review of each factor follows: 

  1. Context and background information (personal and familial).  Examiners should conduct a thorough clinical interview with parents in their native language.  Information that can prove essential in eligibility determinations includes, but is not limited to, delayed milestones, speech and language delays, family history of disabilities, and any neurological or medical issues that may be impacting learning and social-emotional behavior. 

  2. History of the language of instruction and types of academic support.  Considering the modality of instruction (e.g., bilingual, two-way immersion, structured immersion), pull-out English as a Second Language (ESL) services, or instruction in the primary language in another country is also important in this process.  This analysis of the language of instruction will help the examiner understand whether the student had a history of English language development support to access the curriculum.  Without appropriate support in the second language, academic difficulties may be more reflective of a difference than a disorder.  In addition, the examiner can determine whether assessing in a language other than English would be appropriate for this student, given the history of academic exposure in that particular language. 
      
  3. A thorough review of evidence-based interventions.  In teasing out differences from disorders, the examiner should investigate whether the student had access to evidence-based interventions.  The examiner should also determine whether the intervention was implemented with fidelity and analyze the progress monitoring data.  This step can also help the examiner evaluate whether the student's academic or social-emotional needs are due to underlying disabilities or to academic skill deficits. 

  4. Assessment of academic progress relative to similar peers.  Given that most standardized assessments do not include ELs or bilingual students in their standardization samples, outcome scores may not reflect the construct the examiner intended to assess.  Instead, low scores may be related to insufficient linguistic proficiency to understand the directions or questions required to complete the test items (i.e., construct-irrelevant variance).  Thus, it is essential to compare the student to peers of similar linguistic, cultural, and academic experience to determine if performance on such assessments is similar to that of students who share the same background.  Other sources that can be gathered at this stage include report cards, language proficiency exams, state tests, and progress monitoring data obtained from interventions. 

  5. Assessment of linguistic proficiency and access to the English curriculum.  Assessing language proficiency and dominance will help the examiner determine whether the student has sufficient language to access the curriculum's linguistic demands and will also guide the choice of the most appropriate language in which to assess the student.  Assessments like the Woodcock-Muñoz Language Survey® (WMLS™ III) and the WJ IV Tests of Oral Language evaluate the student's Cognitive Academic Language Proficiency (CALP) and provide a Comparative Language Index (CLI), which helps the examiner determine the examinee's relative dominance in English and Spanish.  

  6. Selection of culturally and linguistically appropriate tools and procedures to answer the referral question(s).  Utilizing culturally and linguistically appropriate tools is required not only for nondiscriminatory assessment (IDEA, 2004) but also to help decrease the cultural and linguistic demands of standardized assessment tools.  Understanding the student's background (#1), surveying the student's language of instruction (#2), evaluating interventions (#3), and determining language proficiency and dominance (#5) will provide a foundation for understanding which assessment tools would be most appropriate for the student.  

  7. Consideration of acculturation factors (cultural validity).  To ensure that the assessment tasks and underlying assumptions are culturally valid (Basterra et al., 2011) for the student, the examiner should spend time interviewing the family and the student (#1) and thoroughly reviewing the educational history (#2).  Information gathered at this stage will help the examiner understand the student's level of acculturation to both the home and host country, uncover cultural patterns of thinking that are not directly measured by the test, and identify other nuances that may impede the administration of the assessment (e.g., communication styles, time-oriented tests, and other latent factors).  The examiner should also self-reflect on personal biases that may interfere with objective data interpretation.  

  8. Incorporation of accommodations.  In addition to selecting culturally and linguistically appropriate tools, the examiner is advised to consult the test publisher's manual to determine suggested testing accommodations for ELs.  Assessing the student in a language in which they have limited proficiency may decrease verbal semantic fluency rates (Portocarrero & Burright, 2007), impede the ability to understand the oral directions required to complete the task (Cormier et al., 2014), or create feelings of frustration when receptive English vocabulary outpaces limited English expressiveness (Gibson et al., 2012), thereby interfering with testing performance.  Thus, accommodations like extra time on select tests, access to an interpreter, allowing responses in both languages, and repetition of certain test items may support those with limited English proficiency.  However, the examiner is advised to consult the testing manual and report all accommodations in the validity section of the psychoeducational report. 

Once the above factors and their impact on the obtained scores have been considered, the examiner, together with the eligibility team, will need to determine whether one or more of the factors had a positive or negative impact on the testing results.  The following scenarios can help apply this framework: 

 

Scenario #1:  


Situation: A child obtains significantly low phonological processing scores on a standardized assessment. 

Validation Process: The eligibility team should consider whether a lack of access to instructional interventions (#s 2 & 3) or limited English proficiency (#5) can explain the low academic performance.  If so, it may follow that low standardized test scores reflect a lack of access to linguistically relevant academic support rather than a disability. 

 

Scenario #2: 

Situation: The student has limited English proficiency, and the ensuing test results indicated significantly low scores on standardized assessments. 

Validation Process: Did the examiner provide testing accommodations (e.g., an interpreter and extra time to process verbal instructions) appropriate for limited English proficiency (#8)?  If not, the lack of access to these accommodations may have hindered the student's ability to access the language required to understand how to complete the test items.  Thus, in this example, like the one above, low test performance may be a result of construct-irrelevant variance (i.e., language proficiency) rather than a cognitive deficit. 

 

Considering the abovementioned factors will involve more time given the in-depth analysis; however, taking the time to review each of them will give the examiner confidence in interpreting data for ELs beyond the tip of the iceberg and may even uncover additional sources of support the child requires to access the curriculum.  

 

Watch: EL Case Study on Data Validation

 

References

Basterra, M. d. R., Solano Flores, G., & Trumbull, E. (Eds.). (2011). Cultural validity in assessment: Addressing linguistic and cultural diversity. Routledge.

Cormier, D. C., McGrew, K. S., & Ysseldyke, J. E. (2014). The influences of linguistic demand and cultural loading on cognitive test scores. Journal of Psychoeducational Assessment, 32(7), 610-623. https://doi.org/10.1177/0734282914536012

Gibson, T. A., Oller, D. K., Jarmulowicz, L., & Ethington, C. E. (2012). The receptive–expressive gap in the vocabulary of young second-language learners: Robustness and possible mechanisms. Bilingualism: Language and Cognition, 15(1), 102-116. https://doi.org/10.1017/S1366728910000490

Individuals with Disabilities Education Act (IDEA) regulations, 34 CFR Part 300 (2004).

Olvera, P., & Gomez-Cerrillo, L. (2011). A bilingual (English & Spanish) psychoeducational assessment model grounded in Cattell-Horn-Carroll (CHC) theory: A cross-battery approach. Contemporary School Psychology, 15, 117-127. https://doi.org/10.1007/BF03340968

Ortiz, S. O. (2019). On the measurement of cognitive abilities in English learners. Contemporary School Psychology, 23(1), 68-86. https://doi.org/10.1007/s40688-018-0208-8

Portocarrero, J. S., & Burright, R. G. (2007). Vocabulary and verbal fluency of bilingual and monolingual college students. Archives of Clinical Neuropsychology, 22(3), 415-422. https://doi.org/10.1016/j.acn.2007.01.015

Sanchez, G. I. (1934). The implications of a basal vocabulary to the measurement of the abilities of bilingual children. The Journal of Social Psychology, 5(3), 395-402. https://doi.org/10.1080/00224545.1934.9921607

Sanchez, S. V., Flores, B. B., Rodriguez, B. J., Soto-Huerta, M. E., Villarreal, F. C., & Guerra, N. S. (2013). A case for multidimensional bilingual assessment. Language Assessment Quarterly, 10(2), 160-177. https://doi.org/10.1080/15434303.2013.769544

 
