The assessment of English learners (ELs) has challenged psychologists and educators for almost a century (Sanchez, 1934). Differentiating between a learning difference and a learning disorder requires that evaluators collect, synthesize, and analyze multiple sources of information while seeking to understand the roots of the individual’s learning difficulties. However, because of complex federal, state, and district mandates, or simply a lack of training in the assessment of culturally and linguistically diverse (CLD) learners, there is a temptation to interpret assessment results superficially at the standard-score level (i.e., the tip of the iceberg) without going deeper into the many probable causes of the lack of academic progress. In contrast, looking at the student holistically and within the context of cultural and linguistic variables, educational experience, and intervention history gives the examiner a more comprehensive picture of the child. Several psychoeducational assessment frameworks have been proposed that highlight the importance of considering language proficiency, using culturally appropriate tools, and integrating multiple data sources (see Olvera & Gomez-Cerrillo, 2011; Ortiz, 2019; Sanchez et al., 2013). These frameworks have proven helpful and insightful for assessing ELs.
However, to make nondiscriminatory eligibility determinations, it is essential that examiners not only gather multiple sources of information but also use those data to validate standardized cognitive assessment scores. Assessment validation, as used in this context, refers to the process of ensuring the accuracy and quality of the obtained data for educational purposes, including decision-making. For example, failing to consider English language proficiency and potential acculturation variables during interpretation can obscure the true potential of ELs, leading well-meaning educational teams to misguided interpretations of the data. To provide a framework for validating the cognitive assessment results of ELs, this writer proposes that examiners thoroughly investigate and analyze the following:
Understanding how to apply each step will help the examiner validate the assessment and make eligibility determinations with confidence. A brief review of each factor follows:
Once the above factors and their impact on the obtained scores have been considered, the examiner and the eligibility team will need to determine whether one or more of the factors had a positive or negative impact on the testing results. The following scenarios can help apply this framework:
Scenario #1:
Situation: A child obtains significantly low phonological processing scores on a standardized assessment.
Validation Process: The eligibility team should consider whether a lack of access to instructional interventions (#s 2 & 3) or limited English proficiency (#5) can explain the low academic performance. If so, it may follow that the low standardized test scores reflect a lack of access to linguistically relevant academic support rather than a disability.
Scenario #2:
Validation Process: Did the examiner provide access to testing accommodations (e.g., providing an interpreter and extra time to process verbal instructions) appropriate for limited English proficiency (#8)? If not, the lack of these accommodations may have hindered the student’s ability to access the language required to understand how to complete the test items. In this example, as in the one above, low test performance may result from construct-irrelevant variance (i.e., language proficiency) rather than a cognitive deficit.
Considering the factors above will take more time, given the in-depth analysis required; however, reviewing each of them will give the examiner confidence in interpreting ELs’ data beyond the tip of the iceberg and may even uncover additional sources of support the child needs to access the curriculum.
References
Basterra, M. d. R., Solano Flores, G., & Trumbull, E. (Eds.). (2011). Cultural validity in assessment: Addressing
linguistic and cultural diversity. Routledge.
Cormier, D. C., McGrew, K. S., & Ysseldyke, J. E. (2014). The influences of linguistic demand and cultural
loading on cognitive test scores. Journal of Psychoeducational Assessment, 32(7), 610-623.
https://doi.org/10.1177/0734282914536012
Gibson, T. A., Oller, D. K., Jarmulowicz, L., & Ethington, C. E. (2012). The receptive–expressive gap in the
vocabulary of young second-language learners: Robustness and possible mechanisms. Bilingualism:
Language and Cognition, 15(1), 102-116. https://doi.org/10.1017/S1366728910000490
Individuals with Disabilities Education Act (IDEA) regulations, 34 CFR Part 300 (2004).
Olvera, P., & Gomez-Cerrillo, L. (2011). A bilingual (English & Spanish) psychoeducational assessment
MODEL grounded in Cattell-Horn-Carroll (CHC) theory: A cross battery approach. Contemporary School Psychology, 15, 117-127. https://doi.org/10.1007/BF03340968
Ortiz, S. O. (2019). On the measurement of cognitive abilities in English learners. Contemporary School
Psychology, 23(1), 68-86. https://doi.org/10.1007/s40688-018-0208-8
Portocarrero, J. S., & Burright, R. G. (2007). Vocabulary and verbal fluency of bilingual and monolingual
college students. Archives of Clinical Neuropsychology, 22(3), 415-422.
https://doi.org/10.1016/j.acn.2007.01.015
Sanchez, G. I. (1934). The implications of a basal vocabulary to the measurement of the abilities of bilingual
children. The Journal of Social Psychology, 5(3), 395-402. https://doi.org/10.1080/00224545.1934.9921607
Sanchez, S. V., Flores, B. B., Rodriguez, B. J., Soto-Huerta, M. E., Villarreal, F. C., & Guerra, N. S. (2013). A case
for multidimensional bilingual assessment. Language Assessment Quarterly, 10(2), 160-177.
https://doi.org/10.1080/15434303.2013.769544