Some peer reviews of academic grant applications in Australia are alleged to have been written by ChatGPT – so much so that the Australian Research Council (ARC) has issued a statement warning against the use of generative AI.
The claim surfaced on Twitter via a thread written by user ARC_Tracker, who is a researcher at an Australian university. In written correspondence with The Chainsaw, ARC_Tracker shared that this could have been “going on at some level since ChatGPT became available.”
Peer reviewing: How does it work?
In Australian academia, when a researcher intends to seek funding for a specific research project, they would typically apply for a grant. A lengthy proposal is submitted outlining the details of the research – ARC_Tracker tells us that they’ve come across ones that are 150 pages long.
A peer reviewer is then assigned to assess a grant application and determine whether or not it is successful. Applications are scored on several factors including the investigating team behind it, innovation, project quality, and research environment. In short, it is not an easy process for any party involved.
Broadly speaking, there are two types of peer reviewers: expert and general.
“The experts are supposed to be more or less in the same field of study as the grant proposal and they’re supposed to provide detailed insight and critique, and write that up for the applicants to see and write a formal response to,” ARC_Tracker explains.
“The general assessors (who are only in the same very broad discipline, for example STEM) see all this correspondence and take that into account when trying to rank proposals from best to worst.
“In this case, some assessors have evidently used ChatGPT to write up their assessment. The trouble here is that these assessments don’t contain any critique, informed opinion, insight or, really, assessment at all. Instead they are just simplistic summaries of the grant proposal text,” ARC_Tracker tells The Chainsaw.
ChatGPT in academia
One peer review assessment allegedly contained the phrase “Regenerate response” – the label on a button in ChatGPT’s interface – a giveaway that the assessor had used ChatGPT to produce the text.
“I know many academics who’ve been trying it out for various situations where generic text or a summary is needed. But everyone realises that it doesn’t produce original ideas or any insight for you – it doesn’t do research or assess research. So using it to generate almost an entire grant proposal assessment is plain ridiculous,” ARC_Tracker added.
That peer reviewers are turning to AI tools like ChatGPT points to larger underlying issues when it comes to the pressure faced by academics.
“Assessors often aren’t given time to review anything in their academic workload model at universities… and so the peer review process is generally under a lot of pressure in Australia,” ARC_Tracker told The Chainsaw.
“… Several 100-plus-page grant proposals land on your desk, each requiring anywhere from a day to review or, if you’re really very experienced, maybe only a few hours each. All to be done without an allocation of time from your employer to do it. You can see the time pressure here.”
The ARC responds
On June 30, the ARC released a statement, presumably in response to the social media buzz surrounding the issue. Citing the 2018 Australian Code for the Responsible Conduct of Research, the ARC cautioned that peer reviewers should “ensure the confidentiality of information received.” At the time of writing, no further action has been taken by the ARC.
Were other AI chatbots, like Google’s Bard, also used in Australia? It is very difficult to say, ARC_Tracker tells us. However, what is common across such cases, ARC_Tracker says, is the “complete lack of any real critique or assessment by the peer-reviewer, which is disappointing.”
“It lets down their colleagues and the peer review process – sometimes described as the ‘best solution we have’ to assessing original research.”