SOTEL

Melbourne CSHE Scholarship of Technology Enhanced Learning – a digital education network Hub

Demonstration of how ChatGPT handles a common assessment task

Example: A 3-part written self-reflection assignment from a 12-week engineering project, simply copied and pasted into ChatGPT 3.5 with no additional ‘prompt engineering’ – results: https://go.unimelb.edu.au/34e8

While the resulting ChatGPT-generated ‘self-reflections’ are relatively generic in nature, it would not take much tweaking to make them believable as a student’s self-reflection on a semester-long group project!
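For anyone wanting to repeat this kind of test at scale, the same zero-prompt-engineering paste can be reproduced programmatically. Below is a minimal Python sketch assuming the OpenAI Python client; note that the original demonstration simply used the ChatGPT web interface, and the function name and example brief text here are illustrative, not from the original post.

```python
# Hypothetical sketch: sending an assignment brief to a chat model verbatim,
# with no system prompt or other 'prompt engineering' added around it.

def build_request(assignment_brief: str) -> dict:
    """Wrap the pasted brief as a single user message, exactly as typed."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": assignment_brief}],
    }

# Actually sending the request requires an API key, e.g.:
#   from openai import OpenAI
#   client = OpenAI()
#   reply = client.chat.completions.create(**build_request(brief))
#   print(reply.choices[0].message.content)

if __name__ == "__main__":
    brief = "Part 1: Reflect on your contribution to the 12-week group project..."
    request = build_request(brief)
    print(request["model"])
```

Running the same brief several times and comparing the outputs gives a quick sense of how generic (and how easily ‘tweaked’) the generated reflections are.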

Assessment Outline: ASClinic_A08_Brief_Self_Reflection_copy



Launching the TEL Community of Practice

The first session (Webinar) of the Technology Enhanced Learning (TEL) Community of Practice at the University of Melbourne (with 196 ASCILITE members) was a ‘Meet the Community’ virtual event where the ASCILITE Executive introduced themselves and outlined the activities of the ASCILITE Society. Cochrane, T., Cowling, M., Huber, E., Schier, M., Gregory, S., Jones, H., Vanderburg, R., & Barker, S. (2024). Meet the ASCILITE Community 2024 (Version 1). The University of Melbourne. https://doi.org/10.26188/25404487.v1

See more info at: https://ascilite.org


Growing the TEL Community Of Practice/Network in 2024

This program aims to mobilise and grow the TEL community across the University (https://blogs.unimelb.edu.au/sotel/) through a series of one-hour webinars, culminating in the ASCILITE2024 Conference in December hosted by the University of Melbourne (https://2024conference.ascilite.org). Recordings are shared on UniMelb Figshare for asynchronous viewing after the webinars.

  1. Meet the ASCILITE Community – 8th March 12-1pm
    • Find out about the ASCILITE Executive and ASCILITE activities
  2. SoTEL Showcase – 15th March 12-1pm (Philippa Marriott)
    • Hear from practitioners of TEL
  3. Writing submissions for the ASCILITE conference – 12th April 12-1pm
    • Tips for writing successful TEL conference submissions
  4. SoTEL Symposium/Showcase (Virtual PechaKucha’s of TEL practice) – 19th April 12-1pm
    • Showcase of TEL practice in lively Pecha Kucha format
  5. Introduction to ASCILITE Publications (APUBS) and Peer Review – 10th May 12-1pm
    • Tips for submitting and Peer review of ASCILITE submissions
  6. SoTEL Trendsetter/Showcase – 17th May
    • Hear from practitioners of TEL
  7. SoTEL Trendsetter/Showcase – 7th June
    • Hear from practitioners of TEL
  8. Introduction to CMALT professional accreditation – 14th June 12-1pm
    • What is the Certified Member of the Association for Learning Technology and how do I get it?

Designed for: Academic and professional staff interested in TEL
Duration: A series of one-hour webinars throughout the year
Delivery: Starts in March; online

Registration Page: https://melbourne-cshe.unimelb.edu.au/pd/teaching-learning-and-assessment/tel-network


2023 SoTEL Network Recap

While we didn’t update the SoTEL Blog regularly in 2023, we were busy – here’s a summary of some of the SoTEL Network activities throughout 2023 as we move into 2024:


SoTEL Symposium 2023 (Reimagined)

We reimagined the 2023 SoTEL Symposium as a series of Trendsetter (keynote) presentations over a number of weeks, alongside submitted presentation abstracts published in PJTEL. This turned out to be fortuitous, as New Zealand suffered major flooding and internet outages during February – February seems to have become the annual ‘disaster’ month for NZ due to climate change!

See the Figshare Trendsetter presentation recordings below:


ChatGPT: Why I don’t fear our new AI overlord

Charles Sevigny – Associate Professor, Anatomy and Physiology

As artificial intelligence technologies continue to evolve, chatbots such as ChatGPT are finding new applications in fields ranging from customer service to language translation. In education, ChatGPT has been gaining attention as a tool that could assist students in their studies. While it presents numerous benefits, such as instant access to information and personalised feedback, it also carries potential risks, including the possibility of students using it to cheat on exams or assignments. In this article, we will explore the ways in which these risks can be mitigated through the crafting of thoughtful and strategic exam questions.

Shamefully, I have engaged in the now-ubiquitous trend of utilising ChatGPT to write my opening paragraph. This is not to make a point, but simply because I am lazy and pressed for time – traits which I believe will be shared by most students who elect to engage in ChatGPT-facilitated academic misconduct simply to scrape by in their subjects. But before we raise the rapier of trepidation and return to holding exams on stone tablets, I’d like to share my experience of what this tool can and can’t do effectively, for good or for evil.

In its current state it is not a tool for academic excellence, at least not to the standard expected in tertiary education. It does, however, do some things reasonably well.

  • ChatGPT can write a serviceable, albeit derivative, Nordic Noir screenplay about two Physiologists falling in love despite the dark shroud veiling their ventricular myocardia. (Credit to Mr Andrew Hammond for his tireless exploration of this critical trope).
  • It helped with the HTML5 on my Canvas page to eliminate annoying white space and misaligned buttons. Not only did it identify the problem and teach me a bit about it, it also happily repaired it for me and delivered fresh code. Its unsurprising penchant for writing and correcting code allowed it to excel on the Google coding exam, resulting in a starting salary of over US$174,000. Fortunately, academics’ jobs are not at risk, as ChatGPT was not attracted by our salary package.

To assess the risk of students using this tool to generate and plagiarise answers for assignments and tests, I ran my exam from last semester through it to see how it would cope. My experiences and observations are in the context of Biomedical Science, but some may be applicable elsewhere. In short, it failed the exam.

What ChatGPT can’t do (yet):

  • View an image – by asking students to interpret a graph, diagram or any other image, ChatGPT is instantly rendered helpless.
  • Assess the accuracy of its sources – ChatGPT’s source of information is ‘the internet’. Not a curated, verified internet, but the whole, misinformation-filled internet. As a result, it either got many questions completely incorrect, or tried to cobble together and justify a range of information that ultimately contradicted itself. It cannot match the level of critical analysis we expect from our students.
  • Cite references – ChatGPT will provide you with very believable references. All of them completely fabricated.
  • Interpret data or extrapolate – give ChatGPT a data set, ask it to interpret the data in a certain context, or predict what may happen next, and you are at risk of breaking the internet. It has no capacity to tell you anything that hasn’t already been solved or written down somewhere.
  • Know what we covered in class – while there may be several answers to a given question, the one I am looking for is derived from the material delivered in class. A simple “consider the two mechanisms we discussed in lecture 12…” will thwart a student’s ability to plug and copy. A student may then proceed to tell ChatGPT everything that was covered in class, but at that point the student has actually learned something, and I’m fine with that.
  • Get hypothetical – inventing a hypothetical situation/disease/alien creature and asking students to apply their understanding to make a prediction is an old favourite of mine. It is not a favourite of ChatGPT, which is incapable of knowing the answer to something you just completely made up.

You may find that you are already doing most of these things for S/LAQs for a similar reason. During the pandemic, we learned to write questions that couldn’t simply be ‘Googled’. While ChatGPT may compile sources and write to a high grammatical standard, it is still limited by the same database as any search engine.

Despite sitting on the borderline of pass/fail for my second-year exam, it completely fell apart when attempting to answer the calibre of questions set for third-year subjects. While the above techniques still apply, requiring understanding held only by experts in their field left the generated answers riddled with errors and, occasionally, completely invented principles.

The higher the complexity of the information, or the higher it sits on Bloom’s taxonomy, the worse ChatGPT will perform.

In conclusion, I still sleep easy. While AI technology will continue to evolve, in its current state it will not be receiving a degree from this University. Our best defence against ChatGPT being used for evil is to continue crafting, in its own words, “thoughtful and strategic exam questions”.

Share your experience in crafting questions only a human can answer in the comments!



@MelbCSHE #XRBootCamp 2022 Summary @Toddstretton @Aiello_Stephen https://doi.org/10.26188/20103530.v1 and https://doi.org/10.26188/20113523.v1

The Immersive Reality BootCamp ran over four sessions, 20th–23rd June 2022.

Notes and links to resources are available at:

Recording of the Introductory Webinar is available at:

Recording of the Expert Panel Discussion is available at:

  • Cochrane, Thomas; SEVIGNY, CHARLES; Stretton, Todd; Aiello, Stephen; LOVERIDGE, BENJAMIN; Birt, James; et al. (2022): Immersive Reality BootCamp Panel. University of Melbourne. Media. https://doi.org/10.26188/20113523.v1

Photos from the DLH HMD Workshop on 22nd June:

Photos from the Arts Digital Studio CAVEs on 23rd June:



