Ninth Thing: Generative AI

Research is long and life is short. The emergence of generative AI (GAI) tools promises to help organise and synthesise the mountains of information researchers face. These tools claim to scan databases, extract relevant research, summarise articles and polish prose at the press of a button. It sounds incredible (and maybe a little unsettling). This post, written by Justin Park, looks at some popular tools and evaluates their outputs.

A disclaimer: everything in this post is derived from the author’s own experience and use of these tools as of July 2023. The capabilities of these tools will change…and not always for the better. You may want to familiarise yourself with a broad overview of how these tools work, where they get their data and some of the ethical issues, as well as the basics of prompt engineering. And remember, graduate researchers are responsible for understanding the University’s Academic Integrity guidelines around generative AI tools.

Putting GAI tools to the test

I compared ChatGPT, Bing, Elicit, Scholarcy, QuillBot and Scite on brainstorming research questions, producing literature reviews, summarising articles, and editing writing.

Elicit (free), “The AI Research Assistant,” offers to conduct literature reviews, run brainstorming sessions, and more. Like Bing (free), Elicit runs on a version of OpenAI’s GPT models (ChatGPT itself is free). While Bing and ChatGPT can provide erroneous or fabricated sources, Elicit generally draws on papers contained in Semantic Scholar and the Microsoft Academic Graph. Similarly, Scite (free trial for 7 days) offers to produce a literature review as well as help brainstorm research questions. Scite runs on its own system and claims it will not fabricate sources. Scholarcy (free trial) is unclear about what kind of software it runs on, but its output was generally accurate.

Task One: Brainstorming 

For brainstorming research questions, Elicit, Bing, ChatGPT, and Scite offer generic but helpful suggestions, generating a wide range of approaches almost instantaneously. For example, given a general research question about developing student writing in STEM (Science, Technology, Engineering and Mathematics) subjects, each tool produced multiple approaches along the axes of pedagogy, professional development, university context, academic publishing, rhetoric and composition studies, and more. Scite provided the highest level of specificity and complexity, making its suggestions the most appropriate to academic research.

Takeaway: None of the tools’ output was entirely original, but they would be very helpful for getting started.

Tip: Try regenerating responses multiple times for more varied outputs and use the suggested questions to drill down into an area. 
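The web interfaces make regeneration a one-click affair, but the same tip can be scripted. Below is a minimal sketch (not from the original post) of re-running a brainstorming prompt through OpenAI’s Python SDK; the model name and prompt are illustrative assumptions, and you would need your own API key.

```python
# A sketch of the "regenerate multiple times" tip, assuming access to
# OpenAI's Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable. The model name is an illustrative choice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Suggest five distinct research questions about developing "
    "student writing in STEM subjects."
)

# Re-running the same prompt with a non-zero temperature approximates
# pressing 'regenerate' in the web interface: each run will vary.
for run in range(3):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # higher values produce more varied output
    )
    print(f"--- Run {run + 1} ---")
    print(response.choices[0].message.content)
```

Comparing the runs side by side makes it easier to spot the angles worth drilling into further.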

Task Two: Literature reviews 

For literature reviews, where I prompted the tools to find academic articles related to developing student writing in STEM subjects, Elicit, Scholarcy and Scite produced a list of articles along with summaries. Note – Elicit didn’t grant access to all the papers it listed. I could use all the tools to query the number of participants, outcomes and interventions for each paper. Scite, unprompted, provided a theoretical framework: “cognitive process theory of writing.” While convenient, this framework shaped the results it returned. The papers each tool surfaced all differed, as did the results I got from using keywords and Boolean operators in UniMelb’s Discovery tool.

Takeaway: While GAI is useful as a starting point, you should use library databases for depth and breadth. The real potential of GAI tools lies in finding specific information within the articles – provided you know what to look for.

Tip: Use the tools’ capacity to summarise and query papers to identify potentially relevant research when you have too many results. Then skim and scan your selections to decide which papers to read in full.
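Since Elicit draws on Semantic Scholar, you can also query that corpus directly. Below is a minimal sketch (not from the original post) using Semantic Scholar’s public Graph API; the query string and field list are illustrative assumptions.

```python
# A sketch of searching Semantic Scholar's public Graph API directly -
# one of the corpora Elicit draws on. Assumes `pip install requests`;
# no API key is needed for light, rate-limited use.
import requests

resp = requests.get(
    "https://api.semanticscholar.org/graph/v1/paper/search",
    params={
        "query": "developing student writing in STEM subjects",
        "fields": "title,year,abstract",  # which metadata to return
        "limit": 10,
    },
    timeout=30,
)
resp.raise_for_status()

# Skim titles first, then read abstracts for the promising hits.
for paper in resp.json().get("data", []):
    print(f"{paper.get('year')}: {paper.get('title')}")
```

Unlike a chat interface, a query like this returns only papers that exist in the index, which sidesteps the fabricated-source problem entirely.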

Task Three: Editing 

The tool I wanted to use for editing, QuillBot, was unavailable; the website simply couldn’t be reached. Keep in mind – these tools aren’t necessarily permanent and may disappear in a puff of digital smoke.

So, I tried ChatGPT. I began by prompting it to describe the qualities of good academic writing; this gives the tool context to guide its outputs. Next, I uploaded a paragraph from a piece of writing that had been published in an academic journal and prompted: “Act as a university tutor and give me feedback on this paragraph.” The tool suggested using bullet points and breaking the paragraph up into smaller snippets. The advice sounded reasonable, but it had treated the writing as an undergraduate’s. When I prompted it with “Act as a professional editor and edit the following paragraph for publication in an academic journal,” it did much the same.

Takeaway: Beyond helping with grammar and some issues of clarity, these tools aren’t at the level of professional editing that academic writing requires…yet. 

Tip: Use these tools to find common spelling, grammar, and punctuation mistakes. But always check all the output yourself. 
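The same role-prompt pattern can be reproduced through the API. Below is a minimal sketch (not from the original post), assuming OpenAI’s Python SDK; the system message mirrors the prompt quoted above, and the paragraph is a placeholder, not the published text used in the test.

```python
# A sketch of the role-prompt editing experiment, assuming OpenAI's
# Python SDK (pip install openai) and an OPENAI_API_KEY environment
# variable. The model name and paragraph are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

paragraph = "Paste the paragraph you want edited here."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        # The system message sets the role, echoing the prompt above.
        {
            "role": "system",
            "content": (
                "Act as a professional editor and edit the following "
                "paragraph for publication in an academic journal."
            ),
        },
        {"role": "user", "content": paragraph},
    ],
)
print(response.choices[0].message.content)
```

As the test above suggests, treat the output as a first pass that needs careful human checking rather than publication-ready copy.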

Using GAI to work smarter, not harder 

GAI tools currently offer some clear benefits, some dangers, and a lot of questions. As someone who did their PhD in the before-AI times, I can easily see the limits of what these tools can do; the danger lies in lacking the experience and knowledge to see those limits. The tools could be time-savers for sorting through and prioritising a stack of papers, and spurs to thinking of different approaches or finding literature you might not normally encounter.

For early-stage researchers, developing essential research skills – the ability to skim, scan and synthesise information, and to edit writing – is critical. As these tools continue to improve and take over more and more of the research process with the appearance of complete confidence, we will need savvy researchers capable of evaluating whether that confidence is warranted.

About the author

Justin Park is a Learning Strategist with Student and Scholarly Services at the University of Melbourne. He has a PhD in English Literature and Language and was a Gates Cambridge Scholar.

Cite this Thing

You are free to use and reuse the content on this post with attribution to the author. The citation for this Thing is:

Park, Justin (2024). Ninth Thing: Generative AI. The University of Melbourne. Online resource. https://doi.org/10.26188/25339531


