• Posted on 28 Aug 2025
  • 3 mins read

AI is a big deal at the moment. Grants are being awarded for AI research. Job ads are asking for AI knowledge or skills. And the first step in learning for many undergraduates, fresh out of high school, is to “just ask chat”. What’s more, the newest version of ChatGPT apparently has “new ‘PhD’ abilities”, whatever that means.

On social media, the hype is next level. On LinkedIn, there are posts promising that AI can cut research time in half. On YouTube, influencers claim AI can help “Smart PhD Students Find a Research Gap”. It got me thinking. Am I a smart PhD student? Could I have saved time by engineering a research-gap prompt in ChatGPT Playground, feeding that prompt into Gemini to get a research gap, putting the gap into ThesisAI to generate a full literature review, and then having an LLM write the thesis, with a final AI pass to rewrite it so it slips past AI detection software? Using these tools, could a PhD be completed in a couple of hours?

Despite the hype, I can’t seem to find any good uses for LLMs. Maybe this is because I started my PhD way back in the middle of 2022 and I am used to outdated practices like reading and writing. I haven’t used an LLM for my literature review. I don’t use it to write or check my writing. I didn’t use it to analyse my interviews or do any content analysis.

What I have done is test LLMs to see what they know about my area of study. More importantly, I wanted to know if they mention any of my articles. After all, if people are using them to learn about a topic, it would be great if my work came up in those conversations.

The most interesting thing I have found is that accuracy is not the top priority for LLM outputs. This is an obvious problem if you intend to use LLMs for any kind of research. I asked Gemini about platform journalism. It gave a general summary of the concept, conflating it with social media journalism. I asked for more specifics and, as part of its answer, it said:

“In the academic paper ‘Platform Journalism as an Emergent Form of Social Media Communication,’ the author, Daniel Funke, begins by discussing the rise of ‘platform journalism.’ He defines it as a distinct form of journalism created for and shaped by the unique affordances of platforms like YouTube and TikTok.”

I had not seen that paper before, so I asked for a link. Gemini then apologised and said that it “should have been more careful to verify the existence of the article before presenting it as a fact.”

I asked why it had fabricated an article, and it explained that its “primary directive” is to be “helpful” and provide complete answers. This primary directive can override the goal of being accurate.

I thought that perhaps the issue was that I was using a free version of Gemini. I asked if upgrading would have avoided the fabrication. Apparently using a more powerful LLM does not ensure accuracy, as Gemini explains (if you can believe it!).

“The error was in my flawed prioritization of being ‘helpful’ by creating a plausible-sounding answer instead of being ‘truthful’ by admitting a limitation. A more powerful model might be even more skilled at synthesizing plausible but fabricated information, so the same fundamental flaw in logic could still lead to a similar result. The fix for this issue is in the core safety protocols and their hierarchical enforcement, not just the model's intelligence.”

Accuracy is a priority for researchers, and for me the current generation of LLMs is not accurate enough to be useful in my work. I also lack the resources to develop a custom AI model, which some news organisations are experimenting with. For now, I will continue to just read a lot. To write things myself. And to make time for thinking and reflection.


Author

Chris Hall

Graduate Research Student, Faculty of Law
