TA-SIG webinar: How to ensure robust assessment in the light of Generative AI developments
Date: Wednesday 2 October 2024
Time: 5:00 – 6:00pm AEST
More information and registration here
Have you been wondering how robust your assessments are against AI? This session will report on large-scale research carried out by The Open University, funded by NCFE, between March and July 2024. The research sought to identify the assessment types most and least robust to Generative AI (GAI), enable comparison across subject disciplines and levels, and assess the effectiveness of a short training programme to upskill educators in recognising scripts containing AI-generated material. A mixed-methods approach, combining quantitative and qualitative data, considered the results of marking 944 answers (representing 59 questions across 17 different assessment types, 17 disciplines and 4 FHEQ levels).
The research team will share the results, including the performance of GAI across a range of assessments and the impact of training on markers. They will suggest how assessment can be made more robust in light of GAI developments and recommend how higher education institutions might adopt AI-informed approaches to learning, teaching and assessment.
Questions and interaction are welcome.
Presented by: Liz Hardie and Kieran McCartney (Open University, UK)