Content warning: The following article discusses instances of child abuse which may be distressing to some readers.
AI-generated child abuse material is here, in an evil episode of “What if AI is used for bad?” The humans in the clips may not be real, or they may have been created in a real person’s likeness, but ‘deepfakes’ still pose major ethical and legal problems. We asked an expert to explain how Australia is cracking down on this emerging tech.
AI and child abuse material
That epoch in the human timeline has arrived where lawmakers are scrambling to protect kids from perverts who know how to use artificial intelligence (AI) tools.
In Spain, the Policía Nacional have arrested a computer programmer who was busted using AI image software to generate ‘deepfake’ child abuse material.
The images and film clips were made by combining existing child abuse material with the faces of child stars and other victims. Totally new footage and scenes sprang to life, which the paedophile then distributed.
As the Spanish justice system deals with the disturbing crime, lawmakers across the world have been scrambling to make sure that their laws cover the very new field of AI-generated images and deepfakes.
AI-generated child abuse material. Is it illegal?
Here in Australia, both federal and state laws say that it is a criminal offence to possess, control, produce, supply, or obtain child abuse material for personal use or for use by another.
Jarryd Bartle is an expert in laws surrounding porn and general vice. He previously worked at the Eros Association as a policy advisor for the porn industry in Australia, and is now an Associate Lecturer in Criminology and Justice Studies at RMIT University. He says that anyone who creates a deepfake of child abuse material is heading straight into illegal territory.
“Under our child abuse material laws, even a representation of somebody under the age of 18, like a cartoon or an animated image, or even a written description, can constitute child abuse material,” Bartle says.

“And it’s not just creating it that is illegal; possessing it is illegal as well. Both fall within the criteria of child abuse material. You can be prosecuted for possession as well as for manufacture or production. As long as the imagery appears to be somebody under the age of 18, that can fall within the definition of child abuse material. So from the perspective of law enforcement, if it looks like somebody is a minor in a deepfake scenario, then that would be child abuse material regardless of how it was created.”
Deepfakes in Australia
Bartle says that while there haven’t been any Australian deepfake cases prosecuted (yet), the Spanish case will go to trial soon, and law enforcement here will likely take an interest in it, simply to see “how they’ve gathered evidence in that case and how prosecutors have developed their case.”
“It is an unusual situation, and it’s not something that either police or prosecutors here would have much experience in,” Bartle says.
So while yes, AI-generated child abuse material is as illegal as the real thing, authorities in Australia remain somewhat ill-equipped to investigate any potential cases for the time being. But with international cases potentially providing a framework for both investigation and prosecution, that may not last long.
Help is available.
If you need mental health support, please call Lifeline on 13 11 14 or chat online.
Under 25? You can reach Kids Helpline at 1800 55 1800 or chat online.
If you require immediate assistance, please call 000.