ChatGPT: Why I don’t fear our new AI overlord
Charles Sevigny, Associate Professor, Anatomy and Physiology
As artificial intelligence technologies continue to evolve, chatbots such as ChatGPT are finding new applications in fields ranging from customer service to language translation. In education, ChatGPT has been gaining attention as a tool that could assist students in their studies. While it presents numerous benefits, such as instant access to information and personalised feedback, it also carries potential risks, including the possibility of students using it to cheat on exams or assignments. In this article, we will explore the ways in which these risks can be mitigated through the crafting of thoughtful and strategic exam questions.
Shamefully, I have engaged in the now-ubiquitous trend of using ChatGPT to write my opening paragraph. This is not to make a point, but simply because I am lazy and pressed for time, traits which I believe will be shared by most students who elect to engage in ChatGPT-facilitated academic misconduct simply to scrape by in their subjects. But before we raise the rapier of trepidation and return to holding exams on stone tablets, I’d like to share my experience of what this tool can and can’t do effectively, for good or for evil.
In its current state it is not a tool for academic excellence, at least not to the standard expected in tertiary education. It does, however, do some things reasonably well.
- ChatGPT can write a serviceable, albeit derivative, Nordic Noir screenplay about two Physiologists falling in love despite the dark shroud veiling their ventricular myocardia. (Credit to Mr Andrew Hammond for his tireless exploration of this critical trope).
- It helped with the HTML5 on my Canvas page, eliminating annoying white space and misaligned buttons. Not only did it identify the problem and teach me a bit about it along the way, it also happily repaired it and delivered fresh code. Its unsurprising penchant for writing and correcting code allowed it to excel on Google’s coding exam, qualifying it for a position with a starting salary of over US$174,000. Fortunately, academics’ jobs are not at risk, as ChatGPT was not attracted by our salary package.
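For the curious, the repair was of this general flavour. This is a hypothetical reconstruction rather than the actual code from my page: the class name and links are invented, and since Canvas sanitises pasted HTML, inline styles do the work here.

```html
<!-- Hypothetical reconstruction of the kind of fix delivered.
     Real Canvas markup will differ; "Button" and the hrefs are invented. -->
<div style="display: flex; gap: 8px; align-items: center;">
  <!-- margin: 0 removes the default paragraph margins that were
       producing the stray white space between buttons -->
  <a class="Button" href="#module-1" style="margin: 0;">Module 1</a>
  <a class="Button" href="#module-2" style="margin: 0;">Module 2</a>
</div>
```

The flex container lines the buttons up on one row with uniform spacing, which cures both the white space and the vertical misalignment in one go.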
To assess the risk of students using this tool to generate and plagiarise answers for assignments and tests, I ran my exam from last semester through it to see how it would cope. My experiences and observations are in the context of Biomedical Science, but some may be applicable elsewhere. In short, it failed the exam.
What ChatGPT can’t do (yet):
- View an image– by asking students to interpret a graph, diagram or any other image, ChatGPT is instantly rendered helpless.
- Assess the accuracy of its sources– ChatGPT’s source of information is ‘the internet’. Not a curated, verified internet, but the whole, misinformation-filled internet. As a result, it either got questions completely wrong or cobbled together a range of information that ultimately contradicted itself. It cannot match the level of critical analysis we expect from our students.
- Cite references– ChatGPT will provide you with very believable references. All of them completely fabricated.
- Interpret data or extrapolate– Give ChatGPT a data set, ask it to interpret the data in a certain context or predict what may happen next, and you are at risk of breaking the internet. It has no capacity to tell you anything that hasn’t already been solved or written down somewhere.
- Know what we covered in class- While there may be several answers to a given question, the one I am looking for is derived from the material delivered in class. A simple “consider the two mechanisms we discussed in lecture 12…” will thwart a student’s ability to plug and copy. A student may then proceed to tell ChatGPT everything which was covered in class, but at that point, the student has actually learned something and I’m fine with that.
- Get hypothetical– Inventing a hypothetical situation/disease/alien creature and asking students to apply their understanding to make a prediction is an old favourite of mine. It is not a favourite of ChatGPT, which is incapable of knowing the answer to something you just completely made up.
You may find that you are already doing most of these things for S/LAQs for a similar reason. During the pandemic, we learned to write questions that couldn’t simply be ‘Googled’. While ChatGPT may compile sources and write to a high grammatical standard, it is still limited by the same database as any search engine.
Despite sitting on the borderline of pass/fail for my second-year exam, ChatGPT completely fell apart when attempting to answer the calibre of questions set for third-year subjects. While the above techniques still apply, questions requiring understanding held only by experts in the field left its generated answers riddled with errors and, occasionally, completely invented principles.
The higher the complexity of the information, or the higher a question sits on Bloom’s taxonomy, the worse ChatGPT will perform.
In conclusion, I still sleep easy. While AI technology will continue to evolve, in its current state it will not be receiving a degree from this University. Our best defence against ChatGPT being used for evil is to continue crafting, in its own words, “thoughtful and strategic exam questions”.
Share your experience in crafting questions only a human can answer in the comments!