
5 Common AI Scams To Look Out For In 2023


AI scams: In late January, a video surfaced of popular podcasters Joe Rogan and Andrew Huberman seemingly singing the praises of a libido-boosting product called ‘Alpha Grind’. Rogan and Huberman discussed its many supposed health benefits and pointed to how the supplement had surged up the ranks to become an instant bestseller on Amazon. Except there was one big problem: the whole thing was a ‘deepfake’ scam, generated by AI to flog testosterone pills.

While the tech-peddling advocates of AI promise that this powerful new technology will vastly increase productivity, transform modern education and revolutionise human creativity, there has been little to no discussion of the obvious pitfalls of our insatiable appetite for this tech.

Of this tech’s many downsides, the clearest and most concerning is that AI will empower scammers in ways that we, as a species, simply aren’t prepared for.

AI scams: Simple scams already do significant damage

If you think fears of AI scams are just a fear campaign whipped up by paranoid tech haters, consider for a moment how successful some of the most rudimentary scams have already been, particularly when they’re aimed at older internet users who aren’t as ‘digitally native’ as the younger generation.

According to a report from the ABC, the ‘Hi Mum’ scam, where fraudsters pose as a family member and convince people to send money via WhatsApp, successfully duped more than 11,000 Australians into handing over a total of AU$7.2 million over the course of 2022.

If that’s the sort of damage that can be done via simple text messaging, imagine the potency of scams where people’s faces, voices and mannerisms can be perfectly duplicated through the use of AI.

When it comes to artificial intelligence, the genie is well and truly out of the bottle, and there doesn’t seem to be any clear discussion around how we might put it back in. With this in mind, we all need to get up to speed on the many ways that scammers could potentially use AI tools to trick, misinform and swindle us.

Here’s a quick breakdown of the five most nefarious and common AI scams set to appear this year and some good practices for combatting them.

AI scams to avoid in 2023: 5 most common

1) ChatGPT scam: phishing emails

Following its release to the public in November last year, OpenAI’s viral chatbot ChatGPT has all but broken the internet. The chatbot has blasted its way into the record books by surpassing 100 million monthly active users in just two months, reportedly aced an MBA exam at the prestigious Wharton Business School, and even passed an interview test for a high-paying coding job at Google.

OpenAI’s virtual assistant ChatGPT has gotten surprisingly good at writing scam emails.

But that’s not all ChatGPT is being used for. According to new research from digital security firm WithSecure, the AI-powered virtual assistant is particularly good at writing phishing emails, designed to trick readers into handing over sensitive information like bank account details and passwords.

The bottom line of the research is that large language model (LLM) tools like ChatGPT give criminals a serious boost in drafting more convincing written communications for their cyberattacks. The researchers also noted that there is currently little in the way of protections against this, meaning it remains difficult to tell which emails have been written by a human and which have been churned out by an AI-powered chatbot.

Solution: OpenAI Text Classifier

One tool that may help you identify these scams comes from ChatGPT maker OpenAI itself: the ‘AI Text Classifier’, which attempts to detect whether a given piece of text was written by an AI.

Unfortunately, the Text Classifier is still very much in its infancy, correctly flagging AI-written text only 26% of the time. According to OpenAI, the tool works best on long pieces of text, but scam emails are often short and sweet, meaning that the tool, in its current form at least, isn’t going to be much use against them.
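
In the meantime, a little structured scepticism goes a long way, whether an email was written by a human or a machine. Below is a minimal, purely illustrative Python sketch of the kind of red-flag checks a cautious reader (or a simple mail filter) can run on a suspicious email. To be clear, this is not OpenAI’s classifier: the function name, phrase list and example domains are all invented for this example.

```python
import re
from email import message_from_string
from email.utils import parseaddr

# Invented for illustration; real mail filters rely on far richer signals.
URGENCY_PHRASES = ("verify your account", "act now", "account suspended", "unusual activity")

def phishing_red_flags(raw_email: str) -> list[str]:
    """Return a list of simple red flags found in a raw, plain-text email."""
    msg = message_from_string(raw_email)
    flags = []

    # 1) A Reply-To domain that differs from the From domain is a classic tell.
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2]
    reply_domain = parseaddr(msg.get("Reply-To", msg.get("From", "")))[1].rpartition("@")[2]
    if from_domain and reply_domain and from_domain != reply_domain:
        flags.append(f"Reply-To domain ({reply_domain}) differs from sender ({from_domain})")

    body = msg.get_payload()
    if isinstance(body, str):
        # 2) Urgency language designed to rush the reader into acting.
        flags += [f"urgency phrase: {p!r}" for p in URGENCY_PHRASES if p in body.lower()]
        # 3) Unencrypted links are another common warning sign.
        flags += [f"insecure link: {url}" for url in re.findall(r"http://\S+", body)]

    return flags

# Hypothetical example: mismatched Reply-To, urgency language and an insecure link.
sample = (
    "From: support@yourbank.com\n"
    "Reply-To: help@yourbank-alerts.net\n"
    "Subject: Action required\n"
    "\n"
    "We detected unusual activity. Please verify your account: http://yourbank-alerts.net/login\n"
)
print(phishing_red_flags(sample))
```

None of these checks prove anything on their own, and AI-written scams will sail through the grammar ‘sniff test’ that older phishing emails failed, which is exactly why signals like sender domains and links now matter more than writing style.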

2) Voice cloning AI scam

This is one of the most pernicious tools in the new AI suite, and it was used by the scammers behind the Joe Rogan video. If you’re someone who has a large amount of audio of yourself online, you’re a prime target for a ‘voice cloning’ scam: these AI tools download your audio, break it down into its component sounds, and then allow bad actors to type out whatever phrases they want ‘you’ to say.

The voice cloning side of AI tech will undoubtedly lead to a surge in phone call scams. Generative AI that can achieve near-perfect speech cloning, combined with auto-dialling software, could spell disaster for everyday people in the coming months.

For deeper insight into how dastardly these voice cloning scams can get, it’s worth watching the Instagram video on the topic from Metav3rse founder Roberto Nickson.

Solution: speak to your family and check in before responding

If you’re a digital creator with large amounts of audiovisual content of yourself online, it could be worth alerting your family to these types of scams and making sure they’re on guard if a suspicious video, phone call or multimedia message ever heads their way.

Beyond talking to your family, if someone ever sends you a video in which they ask for money or other sensitive information like passwords or bank account details, reach out to the real person through a channel you trust and check that it’s really them.

3) AI-generated ‘deepfake’ scams

The last few months have seen a rapid uptick in generative AI tools that can alter videos so that facial movements match up with similarly AI-generated voice clones. Deepfakes of this kind are rampant across social media platforms like Twitter.

Recently, an AI-generated deepfake video of Joe Biden went viral for all the wrong reasons. In a seemingly real, 2-minute-long video clip, Biden levelled visceral, hateful statements at members of the trans community. Another, more humorous example is one of Biden telling a joke about the naming of the ‘Sneed’s Feed and Seed’ store.

While many digitally native users can spot deepfakes on closer inspection, if a cyber attacker ever spun up a video of you, crafted in similar fashion to those of Joe Biden, could you be sure that every member of your family and friend circle would be able to distinguish truth from fiction?

A deeply troubling deepfake trend

While deepfakes of popular figures designed to convincingly spread misinformation are frequent, the most common type of deepfake is pornographic. Owing to the nature of these deepfakes, exact figures on their proliferation remain hazy, but a 2020 report from cybersecurity firm Sensity found that the number of pornographic deepfakes online was doubling every six months.

These deepfakes overwhelmingly target women, and are frequently used to humiliate, abuse and extort their victims. As such, governments around the world, most recently the UK, have begun introducing legislation that imposes severe penalties on attackers who create deepfake porn with the intent of causing harm to others.

Solution: check for inconsistencies

Deepfakes are often hyper-realistic, but they’re still not perfect. These videos will often contain minor flaws in lighting, inconsistencies in facial movements, and dramatic disparities between the fidelity of the audio and the quality of the video. If you’re sceptical about a video, really interrogate it and see if you can spot anything strange about the way the subject speaks and moves.

4) AI ‘romance’ scams

Valentine’s Day may have just passed us by, but romance scams, where fraudsters pose as romantic interests and slowly build rapport with their targets before asking them for money, are another area where AI could give criminals a serious boost.

Even without the help of AI, these scams are already extremely widespread, with Aussies losing more than AU$40 million in 2022.

Romance scams get an artificial intelligence boost.

Now it appears that criminals are beginning to train AI chatbots to pose as intimate partners. At present, romance scams are capped by the number of humans willing to engage in such a grift. A well-trained AI chatbot that is indistinguishable from a human could potentially hold conversations with thousands of targets simultaneously, massively increasing the overall profitability of these scams for their perpetrators.

Solution: keep the romance real

In the digital age it can be hard to constantly verify the legitimacy of identities online. The decision to send money to someone is a deeply personal one, but it would be wise to have met your online partner at least once in real life before providing any sort of financial assistance.

5) Data aggregating AI tools

Generative AI technology that becomes proficient at handling large amounts of data could also allow cybercriminals to target vulnerable groups more efficiently.

One example would be training AI programs to sift through stolen data from major companies, such as the databases taken from Optus and Medibank last year. This could help criminals efficiently single out elderly people, people with disabilities, or people in financial hardship.

Solution: check in with friends and family

While this type of operation can’t be spotted with the naked eye, make sure to check in with friends and family, especially those who are older and less proficient with technology. It seems incredibly likely that criminals will harness AI to target these demographics with renewed efficiency as breakthroughs in the industry occur.

Stay alert

As artificial intelligence technology enters its golden age, walking in step with it is an entirely new breed of digital fraudster, armed with a suite of AI-enhanced tools. From hyper-realistic fake media to programs that sift through data with the utmost precision, never before have bad actors had so many tools to leverage against vulnerable people. So stay alert, and make sure your friends and family are aware of the problems posed by this technology.

Until cybersecurity firms catch up to the break-neck pace of progress in this field, it’s up to us, the everyday users of the internet, to stand vigilant against the coming wave of AI scams.