A university in the US has been criticised for using ChatGPT to write an email to students about a mass shooting at another school.
As first reported by the university’s student newspaper, Vanderbilt University in Nashville, Tennessee, sent students an email addressing a mass shooting that happened at Michigan State University.
The email ended with the note: “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication, February 15, 2023.”
The email quickly drew backlash from students and staff for using AI to respond to a horrific event.
“There is a sick and twisted irony to making a computer write your message about community and togetherness because you can’t be bothered to reflect on it yourself,” a Vanderbilt student, originally from Michigan, said in an interview with the student paper.
ChatGPT’s Widespread Use
Since its launch in November 2022, OpenAI’s viral AI chatbot has stoked concern among educators, who worry that students may use it to produce essays.
So far, ChatGPT has proved itself competent enough to nearly pass the US medical licensing exam, the Wharton MBA exam, and the US bar exam.
In Australia, several states, including Victoria, New South Wales, Tasmania, and Western Australia, have banned the chatbot in public schools. But some internet-savvy Gen Z students are circumventing such bans with what’s known as a “handwriting hack” — hooking the AI chatbot up to a 3D printer to produce handwritten text.
In February, Fishbowl, a social networking app for professionals, revealed in a survey that 43% of professionals have used AI tools, including ChatGPT, for work-related tasks. Of that group, 70% admitted to doing so without telling their boss.
“What’s The Difference, Really”
On Twitter, users expressed disbelief at the move.
“One of Vanderbilt university’s EDI offices got caught using ChatGPT to write their latest ‘our condolences over the recent mass shooting’ email and my cynical response is what’s the difference, really,” one user tweeted.
Another user wrote: “The initial moral panic about ChatGPT was that students would cheat on essays. Turns out, the first major use case was administrators faking empathy.”
A Vanderbilt University spokesperson responded to the student paper, clarifying that the university does not use AI tools to generate its messages: “We believe all communication must be developed in a thoughtful manner that aligns with the university’s core values.”
The university followed up with an email apologising to the community, in which the Associate Dean for EDI at Vanderbilt said it was “poor judgement” to use ChatGPT to write the original message.