April Mcmahon: The hype and reality of ChatGPT: Exploring its impact on teaching and learning
There is a lot of noise, perhaps even a little bit of hysteria, around ChatGPT. AI tools of this sort are becoming more mainstream; and alongside the challenges they may present for some aspects of our teaching and assessment, they also bring opportunities. ChatGPT is not the first new tech, or approach, to have been seen as a major challenge to teaching and learning at the start – but then adopted, adapted and embraced down the line.
Presently, ChatGPT and the like are being presented as ‘the end of assessment as we know it’. Universities and other HEIs have already adapted to a wide range of assessment innovations, before and during the pandemic – marking online, screening for academic malpractice, and a broader range of assessment types are all relatively recent changes. In earlier days, such changes tended to divide enthusiastic early adopters from more concerned colleagues. With time and guidance, we have developed a more balanced view, learning as we go to achieve improvements in inclusivity, quality, and efficiency. No doubt the same lies ahead with ChatGPT.
There is already considerable work across the University supporting us to understand such AI tools, and to help our students appreciate when and how these can validly be used. After all, if other employment sectors and broader society are going to experiment with and use such tools, we need to ensure our graduates are equipped to do likewise – with the benefit of a Manchester critical eye. We can also benefit from the positive features these tools can bring to our own work, and adjust our approaches to assessment to future-proof as far as possible against misuse. This ‘adjust’ part may sound daunting, but we now have a positive opportunity to do something our students are encouraging us to do anyway: review our assessment practice. Given current concerns about workload for both staff and students, are we assessing too much? Would fewer assessments, more choice and optionality for students, and more formative elements support their confidence and attainment, while also guarding against the more nefarious uses of AI?
ChatGPT is just one example of the category of generative AI assistants – ‘assistants’ because they actively do something for the user; and ‘generative’ because, given a starting question, they will generate a full answer without further input. The outcome may not be complete, because ChatGPT can only draw on the material it was trained on (and won’t tell you when that material is thin). At the time of writing, it is also poor at providing accurate sources or references for its information. So ChatGPT drafts tend to be starting points that would require considerable extra work to turn into quality answers.
These tools also perform better on short answer tasks than on long, nuanced ones. As it happens, assessment types that are more authentic, inclusive, and thoughtful tend to be more robust in the face of tools like ChatGPT. More secure assessments might ask students to reach across topics, modules, or disciplines; to write from a personal perspective; or to present the same knowledge in more than one modality (such as an essay or write-up and a linked presentation). Including stepped approaches with a formative element can give staff greater confidence that students have done the work themselves, and give students greater confidence through quality, timely feedback – which AI can help us generate.
Generative AI assistants are not all bad. They can be an aid to learning, or guide students through a series of steps. They can provide students (or staff, indeed) with an initial snapshot of available information on a topic. We already have positive examples of our colleagues working with their students to figure out where ChatGPT could give us a head start (with the emphasis on ‘start’), and when it is more likely to be a hindrance than a help. As a learning community, our University can and should engage positively with AI when appropriate, alongside helping students understand academic integrity.
A great deal of work is already underway on safeguarding and enhancing assessment in our University. ‘Assessment for the future’ (AfF), led by Professor Gabrielle Finn, is developing a strategy for assessment that is relevant, inclusive and trustworthy and incorporates student voice. AfF is running an extended pilot of an online assessment platform called CADMUS, which supports student outcomes by providing an improved online assessment experience with the assurance of academic integrity, whilst also improving equality of opportunity for diverse student cohorts. This links to our existing research-based work on differential attainment and optionality in assessment.
We have considerable research strength in both AI and assessment and will be drawing upon both – see Professor Mairead Pratschke’s recent talk in the national repository for teaching excellence. Professors Gabrielle Finn and Andy Brass have established a working group on AI and Assessment; Dr Miriam Firth is leading on assessment within the Flexible Learning Project; the Library has produced helpful FAQs; and the Institute of Teaching and Learning is working across the University to develop more detailed guidance on generative AI assistants.
To get involved in these projects or for more information and support, just contact the Institute of Teaching and Learning (firstname.lastname@example.org), who will point you in the right direction (using their knowledge and experience, rather than AI – for the time being).
Professor April McMahon, Vice President for Teaching, Learning and Students.
Given that we are a learning community… I have a lot of colleagues to thank for their contributions to this blog. Many thanks to Craig Best, Gabrielle Finn, Miri Firth, Kim Graakjaer and Judy Williams for pointing out resources, and for helpful discussion and suggestions. And also to ChatGPT, which provided the title for this very blog.