AI scams: Everything you need to know

Why you can trust SafeWise

For over 11 years, SafeWise experts have conducted independent research and testing to create unbiased, human reviews. We may earn money when you click links on our site, but this does not affect our recommendations. Learn how we test and review.

Hannah Geremia
Sep 19, 2024
8 min read

AI is supposed to make our lives easier, from using Alexa to set an alarm or change the colour of your lights, to extracting important data from documents and making calculations. There's even a place for AI in medicine, where it can flag potential future diseases by looking for patterns in patient health records.

Even though AI can be used for good, the opposite is also true. Let's look at how AI scams work and how you can avoid them.

The difference between traditional and AI scams

Not only has AI led to the proliferation of new types of scams, like deepfakes and voice clones, but it can make your run-of-the-mill scams more convincing and easier to distribute. 

The emergence of AI means scammers and fraudsters can create mass scam emails at a pace that's faster than ever. AI makes it easier for amateur, unskilled hackers to execute cyber-attacks, especially ones once reserved for highly skilled cybercriminals. 

The European Union has introduced the first comprehensive AI law, the AI Act, and Australia is slowly catching up. Even though Australia already has several laws relating to privacy and online safety, the Australian Human Rights Commission supports and encourages the creation of AI-specific legislation.

AI in cybersecurity

AI doesn't just power digital assistants. Hackers use it to automate operations, make scams more believable, crack passwords, and create stealthy malware.

AI is nothing new to scammers. They've been using it for years to clone the voices of your loved ones, create deepfakes, and craft targeted phishing scams. That isn't the extent of it, though. AI can also train computers to analyse and bypass your security defences and predict patterns in your behaviour. It can even analyse patterns in your passwords and build highly effective strategies for cracking them.
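
The password-pattern point can be made concrete. Long before AI, cracking tools prioritised predictable structures like a dictionary word followed by a year; AI simply learns such patterns from breach data at much greater scale. Here's a toy, defensive sketch in Python (the pattern list is illustrative, not drawn from any real cracking tool):

```python
import re

# A few predictable structures that cracking tools try early (illustrative only).
COMMON_PATTERNS = [
    (re.compile(r"^[a-z]+\d{1,4}$"), "word followed by digits (e.g. a year)"),
    (re.compile(r"^(.)\1+$"), "one character repeated"),
    (re.compile(r"^(password|qwerty|letmein|welcome)", re.I), "common base word"),
    (re.compile(r"^\d{4,8}$"), "digits only (dates, PINs)"),
]

def predictable(password: str) -> list[str]:
    """Return the reasons a password looks easy to guess, if any."""
    return [why for pattern, why in COMMON_PATTERNS if pattern.search(password)]

print(predictable("summer2024"))    # flagged: word followed by digits
print(predictable("wR7#kp2!Qz9v"))  # no predictable pattern found: []
```

A password that triggers none of these checks isn't automatically safe, of course – the point is that structure a human finds memorable is exactly what pattern-matching tools guess first.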

Types of AI scams

Voice clone scams

AI tools can be used to clone the voices of people scammers find on social media. Three seconds of audio is all that's needed to create a realistic clone of the victim's voice. The cloned voice is then run through an AI program, enabling the scammer to effortlessly replicate emotions like fear or happiness over the phone.

Pretending to be the target, the scammer will call the victim’s family and demand money or gift cards. The scammer will typically try to convince the person on the other end that their loved one is in real danger, and that there are dire consequences for hanging up or failing to make a payment. 

The goal of the scammer is to send the recipient of the call into fight-or-flight mode, giving them minimal time to think and respond logically. While no one wants to see their loved ones hurt or in danger, the best thing you can do is hang up immediately and call the loved one directly. In future, think twice about answering a call from an unknown number, especially if the caller claims to be someone you know.

AI voice scams are becoming common

In a McAfee survey of 7,000 people, 77% of those who had fallen victim to an AI voice scam said they lost money as a result.

Deepfake images and videos

Deepfakes are manipulated images, audio, and video that are created using artificial intelligence techniques. They can allow the scammer to make a person, whether that be a celebrity or your next-door neighbour, say or do anything they please before posting it online.

Jeff Bezos, Elon Musk, and even Taylor Swift have all had their likenesses used in deepfake scams in recent years. Many of these scams were propagated under the 'Quantum AI' banner, a scheme known for using doctored footage and artificially generated audio to prompt users into fuelling a cryptocurrency fraud.

Deepfake scams try to convince viewers that their product or platform is legitimate and worth investing in because it’s been endorsed by someone the public knows or trusts.

In online dating, deepfake images can also be used for sexual extortion. If a scammer gets ahold of any of your photos, they can blend your public photos with pornographic images or videos, then blackmail you in exchange for money or sensitive information.

Fake websites

Malicious links embedded in email and text scams usually lead to fake websites. Not too long ago, scamming people with a fake website or web store required a certain level of skill and expertise. Now, generative AI makes it possible to conduct large-scale fraud campaigns by combining coding, text, and fake images. Sophisticated scammers will also use this method in conjunction with other fake websites and social media advertisements, making it even more difficult for users to tell they’re visiting a fake, AI-generated website. 

Scam emails and phishing scams

There’s nothing new about email scams and phishing – scammers have long been pretending to be government agencies and banks to get ahold of your sensitive information. 

However, AI has revolutionised the way scammers go phishing. Generative AI tools like ChatGPT can help scammers match the tone of the government body they're trying to impersonate and correct the misspellings and grammar mistakes we typically associate with scam emails. AI tools can also churn out polished, personalised copy at scale, making it much harder for the average consumer to tell an email is a scam.

Even though bots like ChatGPT have built-in functions to prevent people from using it for nefarious purposes, these can easily be circumvented. Not unlike that phone you had in high school, generative AI bots can be jailbroken, meaning hackers can remove the guardrails of intended use to trick AI into bad behaviour.

ChatGPT's evil twins

Malicious alternatives like WormGPT and FraudGPT are popular amongst fraudsters and hackers facilitating cyber-attacks. The AI tools work similarly to ChatGPT but are used to create phishing scams, malware, and malicious code. 

Romance scams

Online dating apps have always been rife with scams – there’s a reason the Nigerian Prince scam is so popular. However, in recent times, scammers have been taking to generative AI to make scamming lonely Australians easier. 

Instead of stealing the photo of an unsuspecting man or woman online, scammers might use generative AI to produce a picture of someone who does not exist. Tools like Midjourney and DALL-E are frequently used to create images of a person a scammer is pretending to be, whether that’s an Australian redhead or a blonde young man looking for an older companion. 

Even if the scammer chooses to do it the old-fashioned way (stealing a real person’s images), they may still use AI chatbots to have realistic text conversations. These bots can be trained to be likable or adopt a certain personality. 

To cast as wide a net as possible, scammers will join a multitude of dating apps, using AI to create fake profiles and chat with hundreds of people at once – all without lifting a finger. Because dating sites are frequently on the lookout for fake or bot accounts, and will delete any that get caught, these bots will try to get you off the app and onto another platform as soon as possible.

Romance scams always have an end goal – whether that's to swindle you out of your life savings or to invest in fake cryptocurrency schemes. Some will even build up your trust for weeks or sometimes months, then strike and leave without a trace. While vague or unnaturally fast responses are useful ways to tell if you’re talking to a bot, one that almost always works is proposing a video call, which they’ll typically decline.

AI investment and financial scams

Online investment platforms sometimes offer a website or app with some sort of AI integration. If you're trading on a platform that claims to use AI, make sure it's registered. Check the Investor Alert List for any unregistered investment 'professionals' known for scamming investors, and ensure the platform you're trading with holds a current Australian Financial Services (AFS) license or an Australian credit license from ASIC.

Scammers often claim that AI can generate a sizeable profit by trading cryptocurrency on your behalf. Any platform that promises a high return with little to no risk or claims their ‘AI can pick guaranteed stock winners’ should raise some red flags.

It's not uncommon for a platform to run an investment scheme that leverages the popularity of AI to prompt people into investing. While it can seem exciting to invest in a company that claims to use AI, proceed with caution: these schemes often lure investors in with AI-related buzzwords and promises of guaranteed returns.

If you think an investment platform might be using deep-faked celebrities to promote themselves, reconsider investing. If you're unsure whether or not the celebrity endorsement is legit, ask yourself why this person is endorsing this particular investment.

Don't invest in strangers

Beware of any online contacts you barely know or have never met asking you to invest. Even if the investment or cryptocurrency platform seems real, this practice, along with a promise of high-yield returns, is common in pig butchering scams.

Distinguishing AI from reality

What makes AI scams dangerous is how hard they can be to spot – certainly harder than conventional scams. Advances in technology mean these scams can appear more genuine and cast a wider net than traditional scams ever could.

Distinguishing a deepfake or an AI-generated image or video from a real one can be challenging, especially given the sophistication of the technology used to create them. For future reference, look at the 'person's' face. Faces are notoriously hard for AI to get right, and you can get pretty good at noticing inconsistencies. Does the person blink too much or too little? Do the lips match up to what the person is saying? What about their skin – is it unnaturally smooth or wrinkled? Do their hands look strange, or are they meshed together or overlapping?

If you're trying to determine whether or not the voice you spoke to on the phone was cloned or real, use your common sense. Did the caller claim your loved one has gotten into a car accident and needs a large sum to fix their car, but they don’t drive? If any call seems out of character or suspicious, take it with a grain of salt and call them back on the number in your contacts.

Protecting yourself and your family

AI scams can be scary, but there are easy steps you can take to avoid falling victim.

  • Use a safe word only you and your family know. Ask for this word in case of an emergency. If you’re contacted by someone claiming to be your son, daughter, or parent asking for money or claiming they’re in danger, ask them for the safe word.
  • Hang up and call them back. Commonly, scammers who claim to be from a government agency or high-profile investment platform will use high-pressure tactics that con you into believing you don’t have much time. They might say, ‘This is a serious, once-in-a-lifetime investment opportunity, and you need to invest now’, or ‘You owe the ATO X amount of money, if you don’t pay it now, over the phone, there will be serious consequences’. If you’re feeling sceptical about a phone call, simply hang up, and call back on your loved ones' known number or the company’s official contact information listed on their website. 
  • Always be sceptical. Anything that sounds too good (or bad!) to be true, usually is.
  • Never provide any personal or financial information or click on any links provided in scam emails or texts. Even if they purport to come from your ‘bank’. 
  • Educate yourself. Take time to learn about what AI can do, and how it can be used to scam people. By understanding how these scammers use AI, you can better recognise scams and avoid falling victim. 
  • Implement email filtering tools that use AI and machine learning to detect and block phishing emails.
  • Strengthen your passwords and keep your personal information safe by using two-factor authentication and a password manager.
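
On the two-factor point, the six-digit codes from an authenticator app aren't magic: most apps implement the TOTP standard (RFC 6238), which derives a short code from a shared secret and the current time, so the code changes every 30 seconds. A minimal sketch in Python, using the RFC's published test secret (never reuse a known secret in practice):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Generate an RFC 6238 time-based one-time password (HMAC-SHA-1, 30 s steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    now = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(now // step))   # time window as an 8-byte counter
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32); at t=59 the code is 287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))  # → 287082
```

Because the code depends on a secret that never leaves your device, a scammer who phishes your password still can't log in without it – which is exactly why 2FA blunts so many of the scams above.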

Final word

AI technology can take scamming to a new level. While Nigerian Prince scams are scarcer than they used to be, highly sophisticated phishing emails and scams have taken their place. AI is capable of mimicking the desired tone of any government body, and can even impersonate your loved ones, so it's important to exercise caution and always keep your financial and personal information protected. 

Hannah Geremia
Written by
Hannah has had over six years of experience in researching, writing, and editing quality content. She loves gaming, dancing, and animals, and can usually be found under a weighted blanket with a cup of coffee and a book.
