Recent Projects at The Language Testing Research Centre (LTRC)
About the LTRC
The Language Testing Research Centre (LTRC), in the School of Languages and Linguistics, Faculty of Arts, is recognised as a world leader in research on language assessment and language program evaluation. The LTRC has a strong, established track record of attracting grant and consultancy funding both nationally and internationally.
The LTRC serves as a global leader in promoting ethical and exemplary research and practice in language testing and assessment. Our aim is to build professional expertise and public understanding of the field and to advocate for appropriate testing practices on behalf of stakeholders. We provide consultancy services, policy advice and public advocacy.
Recent Projects
The LTRC team has worked on a wide range of projects over the past year. I would like to share a few projects in which I was involved: the placement test review project, the Aptis writing project, the LTTC linking study, and the Pearson test preparation project. I briefly describe each below.
- Placement Test Review Project
The language placement tests have been in use since 2014 and, since late 2017, have been a compulsory prerequisite for all new students. With this change in policy, it was time to review the tests and their implementation to ascertain that they are functioning as intended. In 2018, a proposal to fund this review was accepted by the Faculty, the School of Languages and Linguistics and the Asia Institute, and in 2019 work began on the review of seven placement tests (Chinese, French, German, Indonesian, Italian, Japanese and Spanish). The review covers the statistical properties of the tests, analyses of background and test score data, the current cut scores, and stakeholder perceptions; the kind of item-level analysis involved is illustrated in the sketch below. Several research reports were prepared to summarise the major findings. This study has implications for the evaluation of local language assessment projects.
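As a rough illustration of the statistical review mentioned above, the following sketch computes classical test theory statistics (item facility, corrected item-total discrimination, and Cronbach's alpha) for a handful of hypothetical, dichotomously scored items. The response data, variable names and the choice of statistics are assumptions for illustration, not the LTRC's actual analysis.

```python
# Illustrative sketch only: classical item analysis for a hypothetical
# placement test with dichotomously scored items (1 = correct, 0 = incorrect).
# Requires Python 3.10+ for statistics.correlation.
from statistics import correlation, variance

# Hypothetical response matrix: rows = test takers, columns = items.
responses = [
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 0, 1],
    [1, 1, 0, 1, 1],
]

n_items = len(responses[0])
items = [[row[i] for row in responses] for i in range(n_items)]
totals = [sum(row) for row in responses]

# Item facility: proportion of test takers answering each item correctly.
facility = [sum(col) / len(col) for col in items]

# Corrected item-total discrimination: correlation between each item and
# the total score with that item removed.
discrimination = [
    correlation(col, [t - x for t, x in zip(totals, col)])
    for col in items
]

# Cronbach's alpha: internal consistency of the whole test.
alpha = (n_items / (n_items - 1)) * (
    1 - sum(variance(col) for col in items) / variance(totals)
)

for i, (f, d) in enumerate(zip(facility, discrimination), start=1):
    print(f"Item {i}: facility={f:.2f}, discrimination={d:.2f}")
print(f"Cronbach's alpha = {alpha:.2f}")
```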
- Investigating the discourse produced at score levels B2.2 to C2 on the Aptis Advanced Writing Test
Funded by the British Council, this study explored the features of test takers’ writing samples on the two tasks in the Aptis Advanced writing test, an email response task and a website writing task, that distinguished the three CEFR levels of B2.2 (upper intermediate), C1 (advanced) and C2 (proficient). The study consisted of a qualitative and a quantitative phase. During the qualitative phase, we conducted focus groups with ESL experts and Aptis raters, in which they commented on the features of the writing samples at the three CEFR levels in focus. During the quantitative phase, we performed discourse analysis of the email response and website writing samples; a simplified example of such feature-based analysis is sketched below. Based on the findings of this study, we provided recommendations for possible revisions of the rating scales for the two tasks in the Aptis Advanced writing test. This study offers insights into the development of rating scales for learners at the upper end of the language proficiency scale. The final report of this project is available here.
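To give a flavour of what a quantitative discourse analysis might involve, the sketch below computes two common discourse features, type-token ratio (a measure of lexical diversity) and mean sentence length, for hypothetical writing samples grouped by CEFR level. The sample texts and the choice of features are assumptions for illustration; the actual study will have used a much richer feature set.

```python
# Illustrative sketch only: two simple discourse features computed over
# hypothetical writing samples grouped by CEFR level. Real discourse
# analyses typically examine many more features (cohesion, syntactic
# complexity, lexical sophistication, etc.).
import re
from collections import defaultdict

# Hypothetical (level, sample) pairs standing in for Aptis writing scripts.
samples = [
    ("B2.2", "Thank you for your email. I think the plan is good. We can meet soon."),
    ("C1", "Thank you for raising this issue; having considered the proposal carefully, I believe a revised schedule would serve the team better."),
    ("C2", "While I appreciate the rationale underpinning the proposal, its implications for workload distribution strike me as insufficiently examined."),
]

def tokenize(text: str) -> list[str]:
    """Lowercase word tokens; a deliberately crude tokenizer for this sketch."""
    return re.findall(r"[a-z']+", text.lower())

features = defaultdict(list)
for level, text in samples:
    tokens = tokenize(text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    ttr = len(set(tokens)) / len(tokens)   # type-token ratio
    msl = len(tokens) / len(sentences)     # mean sentence length (in tokens)
    features[level].append((ttr, msl))

for level, values in features.items():
    for ttr, msl in values:
        print(f"{level}: type-token ratio={ttr:.2f}, mean sentence length={msl:.1f}")
```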
- The LTTC linking study
Funded by the Language Training and Testing Center (LTTC), Taiwan, this study aimed to link Part 1 of the General English Proficiency Test (GEPT) writing subtest at the intermediate and high-intermediate levels (a Chinese-English translation task at both levels) to the Common European Framework of Reference for Languages (CEFR), following the four stages of familiarisation, specification, standardisation and validation set out in the CEFR manual. The study also explored, through a think-aloud study, the processes by which the panellists linked the GEPT translation scripts to the CEFR levels. The findings indicate that the translation tasks at the two GEPT levels in focus are well aligned with their target CEFR levels, though it is recommended that the cut score for the intermediate level be slightly adjusted based on the linking results; a toy version of how panellist judgments can inform a cut score is sketched below. Drawing on our experience in this study, we also provided recommendations for future researchers linking translation tasks in language tests to the CEFR. The final report of this project is available here.
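As a rough illustration of how standardisation judgments can be turned into a cut score, the sketch below takes hypothetical panellist judgments of translation scripts (each script's score plus the CEFR level a panellist assigned to it) and finds the lowest score at which the majority of judgments reach the target level. The data, the target level and the majority decision rule are all assumptions; this is a simplified stand-in for the procedures in the CEFR manual, not the study's actual method.

```python
# Illustrative sketch only: deriving a provisional cut score from
# hypothetical panellist judgments in a CEFR standardisation exercise.
# Each record is (script score, panellist's judged CEFR level). The rule
# used here (lowest score at which a majority of judgments reach the
# target level) is a simplification of the CEFR manual's procedures.
from collections import defaultdict

TARGET_LEVEL = "B1"  # hypothetical target level for the intermediate GEPT
LEVEL_ORDER = ["A2", "B1", "B2"]

# Hypothetical (score, judged level) pairs pooled across panellists.
judgments = [
    (60, "A2"), (60, "A2"), (65, "A2"), (65, "B1"),
    (70, "A2"), (70, "B1"), (70, "B1"),
    (75, "B1"), (75, "B1"), (75, "B2"),
    (80, "B1"), (80, "B2"),
]

at_or_above = defaultdict(list)
for score, level in judgments:
    reached = LEVEL_ORDER.index(level) >= LEVEL_ORDER.index(TARGET_LEVEL)
    at_or_above[score].append(reached)

# Cut score: lowest score where at least half the judgments reach the target.
cut_score = min(
    score for score, flags in at_or_above.items()
    if sum(flags) / len(flags) >= 0.5
)
print(f"Provisional cut score for {TARGET_LEVEL}: {cut_score}")
```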
- The Pearson test preparation project
Funded by Pearson Education, this study investigated and compared test takers’ preparation practices and strategies on two computerised English speaking tests: the speaking subtest of the Pearson Test of English – Academic (PTE-A) and the College English Test – Spoken English Test Band 4 (CET-SET4) in China. The study adopted a sequential mixed-methods design, consisting of a qualitative phase and a follow-up quantitative phase. Our findings indicate that test takers engaged in highly strategic preparation practices for the PTE-A speaking subtest. Though most of these practices could be considered irrelevant to what the test measures, test takers perceived them as highly effective in boosting their scores on the speaking test. In contrast, test takers’ preparation activities for the CET-SET4 seemed more aligned with improving the target speaking constructs. Test takers reported more positive perceptions of the integrated tasks in the speaking tests, largely due to their perceived authenticity. This study further highlights the complexity of test preparation, suggesting that it can be significantly influenced by the language learning and testing context as well as by features of the target test, such as its task formats, stakes and scoring method.
About the contributor
Dr Jason Fan is Deputy Director and Senior Research Fellow at the Language Testing Research Centre (LTRC), University of Melbourne. His research interests include validity theory and validation of language assessments; development, use and impact of language assessment programmes; and research methods.