Asked during a panel discussion on AI and cyber security at SXSW Sydney what laypeople can do to protect themselves from payment scams and fraud, Australian Payments Plus (AP+) CIO May Lam told the room, “If anyone still has your voicemail, delete it”. Lam explained that scammers may not necessarily want you to answer your phone; they may be more interested in capturing a sample of your voice to aid in deepfake creation.
Deepfakes are digital facsimiles of real people: videos, photographs, or audio recordings manipulated to create a realistic but fake depiction of someone. They have been used to facilitate both AI-enabled scams and fraud globally.
Deepfake scams emerged in 2017, but have rapidly grown in popularity and reach thanks to generative AI adoption. While most high-profile cases target business transactions worth hundreds of thousands to millions of dollars, the democratisation of generative AI means more criminals can apply the same tactics at smaller scales.
“First thing to remember is that all the criminal groups that do either cyber attacks, data breaches, [or] individualised scams, they all work together really well,” warns Shameela Gonzalez, Financial Services and Insurance Lead at CyberX.
“In our industry, we are still trying to enforce collaboration and it happens in really meaningful ways, but criminal groups have already been ten steps ahead of us and are already info sharing, and onselling information.”
According to eSafety Commissioner Julie Inman Grant, deepfake detection tools are lagging behind the technology itself. Free, open-source apps can already be used to create deepfake imagery, yet Australia's eSafety approach mostly revolves around awareness, education, and removal of harmful material. Meanwhile, criminals are capitalising on that same open-source access to build their own tools.
Fraudsters have already developed, and are selling, their own GPTs. FraudGPT, a large language model that generates content to simplify cyber attacks, has reportedly been available on the dark web and Telegram since 2023. The model reportedly lacks guardrails: it can create phishing emails and scam landing pages, and direct users to external resources such as hackers for hire, much as ChatGPT might write a social media caption. Subscriptions can be purchased for as little as US$200 per month, which significantly lowers the barrier to entry for would-be scammers.
Although fraud remains one of the vectors criminals work within, Gonzalez believes scams will continue to gain prevalence.
“A scam fundamentally involves manipulation. The scam is intended for you to go and authorise that transaction yourself believing it is legitimate… Scams, I think, ended up being a far more effective way [for criminals] to bypass traditional fraud controls, and fraud prevention activities.”
Apart from turning off voicemail, the panellists encouraged users to set up two-factor authentication (2FA), refrain from clicking links in emails and text messages, and stay informed about current scams using resources like Scamwatch.
This story first appeared on our sister website Reviews.org/au.