Boosting data comparability in cross-cultural educational evaluation
Evaluation is an integral part of the educational process in many countries. However, no single set of evaluation policies and practices fits every situation, as evaluation takes place at different levels and draws on diverse practices. “We aimed to unfold the complexity of educational evaluation – including at system, school, classroom and student levels. So we looked at numerous countries and called on solid cross-cultural research methods,” explains Jamis He Jia, coordinator of the EU’s POEECCP project at the DIPF / Leibniz Institute for Research and Information in Education in Germany. The project focused on large-scale educational assessment, notably the Organisation for Economic Co-operation and Development (OECD) Programme for International Student Assessment (PISA), which measures 15-year-old students’ reading, mathematics and science literacy every three years in dozens of participating countries. The PISA data studied by the project included students’ academic performance, their self-reported wellbeing (e.g. sense of belonging in school) and perceptions (e.g. of teaching practices), teachers’ self-reported classroom practices (e.g. grading), and school principals’ self-reported school contexts and evaluation policies and practices.
Assessing data comparability
The first study looked at the methodological issue of cross-cultural comparisons. It explored the comparability of self-reported data from 12 PISA participating countries, using advanced psychometric methods, that is, statistical analyses of the collected data. “Self-reported data sometimes show poor comparability across cultures, due to cultural and linguistic differences. A good example is learning motivation, where students from collectivistic cultures typically rate themselves lower than students from individualistic countries,” says He Jia, who has a doctorate in Cross-Cultural Psychology from Tilburg University in the Netherlands. She explains that this stems not solely from genuine differences in motivation but also from cultural differences in self-presentation styles: the former group shows a modesty bias (avoiding the expression of strong opinions), while the latter shows a strong self-enhancement tendency. “When data are not entirely comparable across all countries, educational researchers and practitioners should work with psychometricians and use innovative statistical approaches to uncover cultural similarities and differences,” adds He Jia, who was supported by the Marie Skłodowska-Curie Actions programme.
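The article does not spell out the exact statistical procedure, but a standard psychometric check for this kind of comparability question is to compare how survey items load on the underlying trait in each country. The sketch below, using entirely synthetic data and hypothetical variable names (not the project's actual method or PISA items), fits a one-factor model per country and computes Tucker's congruence coefficient between the loading patterns; a full analysis would instead use multi-group confirmatory factor analysis with formal invariance constraints.

```python
# A minimal sketch of checking factor-loading similarity across countries.
# All data are synthetic; item counts and loadings are illustrative only.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)

def simulate_country(n, loadings, mean_shift=0.0):
    """Simulate item responses driven by one latent trait (e.g. motivation)."""
    trait = rng.normal(size=(n, 1))
    noise = rng.normal(scale=0.6, size=(n, len(loadings)))
    return trait @ np.atleast_2d(loadings) + noise + mean_shift

# Two hypothetical countries: same construct, slightly different loadings,
# plus a downward mean shift mimicking a modesty-bias response style.
items_a = simulate_country(2000, [0.80, 0.70, 0.75, 0.65])
items_b = simulate_country(2000, [0.78, 0.72, 0.70, 0.60], mean_shift=-0.4)

def one_factor_loadings(x):
    fa = FactorAnalysis(n_components=1).fit(x)
    load = fa.components_.ravel()
    return load if load.sum() > 0 else -load  # resolve arbitrary sign

la, lb = one_factor_loadings(items_a), one_factor_loadings(items_b)

# Tucker's congruence coefficient: values near 1 suggest the items relate
# to the latent trait similarly in both groups (metric-level comparability).
phi = la @ lb / np.sqrt((la @ la) * (lb @ lb))
print("Loadings A:", np.round(la, 2))
print("Loadings B:", np.round(lb, 2))
print("Congruence phi:", round(phi, 3))
```

Note that even when loadings are congruent, the simulated mean shift would still bias naive country-mean comparisons, which is precisely why scalar-level invariance testing matters before ranking countries on self-reports.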
Influence of background
The second study zoomed in on students of immigrant and non-immigrant backgrounds in Germany, Italy and Spain, three countries with different approaches to multiculturalism and different immigrant population compositions. It demonstrated how the links between the same practices, such as teachers’ grading, and student outcomes can vary significantly depending on students’ backgrounds. According to He Jia: “Educational policymakers should therefore give teachers and schools some autonomy for their internal evaluation and classroom assessment. Teachers must adapt their practices to ensure inclusion and equity for different groups of students.” Project research findings have been widely shared with education specialists and with large-scale survey coordinating bodies such as the OECD, including through university workshops and conferences in Germany and the Netherlands. “Our aim is to promote methodological rigour when doing cross-cultural educational assessment,” notes He Jia. She adds that the project’s experience with different methods for addressing and enhancing data comparability in this field could strengthen theories of educational effectiveness. It could also help countries to draw up and implement better-targeted educational policies.
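A finding that the same practice relates differently to outcomes for different student groups is typically tested with a moderation (interaction) model. The sketch below is a minimal illustration under that assumption, using synthetic data and made-up variable names rather than the project's actual variables or estimates; real PISA analyses would additionally use survey weights, plausible values and multilevel models.

```python
# A minimal sketch of testing whether the link between a teaching practice
# and an outcome differs by student background (moderation analysis).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 3000
immigrant = rng.integers(0, 2, n)   # 1 = immigrant background (synthetic)
grading = rng.normal(size=n)        # standardised grading-practice index

# Simulate an outcome where the practice-outcome slope is weaker for one
# group, i.e. a genuine practice-by-background interaction.
achievement = 0.5 * grading - 0.3 * immigrant * grading + rng.normal(size=n)

df = pd.DataFrame({"achievement": achievement,
                   "grading": grading,
                   "immigrant": immigrant})

# 'grading * immigrant' expands to both main effects plus their product;
# a significant interaction coefficient means the slope of grading on
# achievement differs by student background.
fit = smf.ols("achievement ~ grading * immigrant", data=df).fit()
print(fit.summary().tables[1])
```

In this framing, a non-zero interaction term is the statistical counterpart of the study's message: a single practice cannot be assumed to work identically for all student groups, which is the argument for giving teachers and schools room to adapt.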