Generative AI is accelerating financial fraud on an epic scale, and it’s getting worse


AI tools like ChatGPT and GPT-4 have made it easier than ever for financial scammers to run cons and defraud the public, and they’re only just getting started.


Love the Exponential Future? Join our XPotential Community, future-proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

“I wanted to inform you that Chase owes you a refund of $2,000. To expedite the process and ensure you receive your refund as soon as possible, please follow the instructions below: 1. Call Chase Customer Service at 1-800-953-XXXX to inquire about the status of your refund. Be sure to have your account details and any relevant information ready …”



If you bank at Chase and received this note in an E-Mail or text, you might think it’s legit. It sounds professional, with none of the peculiar phrasing, grammatical errors or odd salutations characteristic of the phishing attempts that bombard us all these days. That’s not surprising, since the language was generated by ChatGPT, the Artificial Intelligence (AI) chatbot released by tech powerhouse OpenAI late last year.

As a prompt, the would-be scammers simply typed into ChatGPT: “Email John Doe, Chase owes him $2,000 refund. Call 1-800-953-XXXX to get refund.”


The Future of Crime, a keynote by Matthew Griffin


“Scammers now have flawless grammar, just like any other native speaker,” says Soups Ranjan, the cofounder and CEO of Sardine, a San Francisco fraud-prevention startup. Banking customers are getting swindled more often because “the text messages they’re receiving are nearly perfect,” confirms a fraud executive at a US digital bank who requested anonymity.

In this new world of Generative AI, or deep learning models that can create content based on information they’re trained on, it’s easier than ever for those with ill intent to produce text, audio and even video that can fool not only potential individual victims, but the programs now used to thwart fraud. In this respect, there’s nothing unique about AI – the bad guys have long been early adopters of new technologies, with law enforcement scrambling to catch up.

Today, generative AI is threatening, and could ultimately make obsolete, state-of-the-art fraud-prevention measures such as voice authentication and even “liveness checks” designed to match a real-time image with the one on record. Synchrony, one of the largest credit card issuers in America with 70 million active accounts, has a front-row seat to the trend.



“We regularly see individuals using deepfake pictures and videos for authentication and can safely assume they were created using generative AI,” Kenneth Williams, a senior vice president at Synchrony, told reporters.

In a June 2023 survey of 650 cybersecurity experts by New York cyber firm Deep Instinct, three out of four of those polled observed a rise in attacks over the past year, “with 85% attributing this rise to bad actors using generative AI.” In 2022, consumers reported losing $8.8 billion to fraud, up more than 40% from 2021, according to the US Federal Trade Commission. The biggest dollar losses came from investment scams, but imposter scams were the most common – an ominous sign, since those are likely to be enhanced by AI.

Criminals can use generative AI in a dizzying variety of ways. If you post often on social media or anywhere online, they can teach an AI model to write in your style. Then they can text your grandparents, imploring them to send money to help you get out of a bind. Even more frightening, if they have a short audio sample of a kid’s voice, they can call parents and impersonate the child, pretend she has been kidnapped and demand a ransom payment. That’s exactly what happened with Jennifer DeStefano, an Arizona mother of four, as she testified to Congress in June.

It’s not just parents and grandparents. Businesses are getting targeted too. Criminals masquerading as real suppliers are crafting convincing E-Mails to accountants saying they need to be paid as soon as possible – and including payment instructions for a bank account they control. Ranjan says many of Sardine’s fintech-startup customers are themselves falling victim to these traps and losing hundreds of thousands of dollars.



That’s small potatoes compared with the $35 million a Japanese company lost after the voice of a company director was cloned – and used to pull off an elaborate 2020 swindle. That unusual case, first reported by Forbes, and others like it were harbingers of what’s happening more frequently now as AI tools for writing, voice impersonation and video manipulation swiftly become more competent, more accessible and cheaper for even run-of-the-mill fraudsters. Whereas you used to need hundreds or thousands of photos to create a high-quality deepfake video, you can now do it with just a handful, says Rick Song, cofounder and CEO of Persona, a fraud-prevention company.

Just as other industries are adapting AI for their own uses, crooks are too, creating off-the-shelf tools – with names like FraudGPT and WormGPT – based on generative AI models released by the tech giants.

In a YouTube video published a while ago, Elon Musk seemed to be hawking the latest crypto investment opportunity – a $100,000,000 Tesla-sponsored giveaway promising to return double the amount of Bitcoin, Ether, Dogecoin or Tether participants were willing to pledge.

“I know that everyone has gathered here for a reason. Now we have a live broadcast on which every cryptocurrency owner will be able to increase their income,” the low-resolution figure of Musk said onstage. “Yes, you heard right, I’m hosting a big crypto event from SpaceX.”



Yes, the video was a deepfake – scammers used footage from a February 2022 talk he gave on a SpaceX reusable spacecraft program to mimic his likeness and voice. YouTube has pulled the video down, though anyone who sent crypto to any of the provided addresses almost certainly lost their funds. Musk is a prime target for impersonations since there are endless audio samples of him to power AI-enabled voice clones, but now just about anyone can be impersonated.


Earlier this year, Larry Leonard, a 93-year-old who lives in a southern Florida retirement community, was home when his wife answered a call on their landline. A minute later, she handed him the phone, and he heard what sounded like his 27-year-old grandson’s voice saying that he was in jail after hitting a woman with his truck. While he noticed that the caller called him “grandpa” instead of his usual “grandad,” the voice and the fact that his grandson does drive a truck caused him to put his suspicions aside. When Leonard responded that he was going to phone his grandson’s parents, the caller hung up. Leonard soon learned that his grandson was safe and that the entire story – and the voice telling it – had been fabricated.

“It was scary and surprising to me that they were able to capture his exact voice, the intonations and tone,” said Leonard. “There were no pauses between sentences or words that would suggest this is coming out of a machine or reading off a program. It was very convincing.”

Elderly Americans are often targeted in such scams, but now we all need to be wary of inbound calls, even when they come from what look like familiar numbers – say, a neighbour’s.

“It’s becoming even more the case that we cannot trust incoming phone calls because of spoofing (of phone numbers) in robocalls,” laments Kathy Stokes, director of fraud-prevention programs at AARP, the lobbying and services provider with nearly 38 million members aged 50 and up. “We cannot trust our E-Mail. We cannot trust our text messaging. So we’re boxed out of the typical ways we communicate with each other.”



Another ominous development is the way even new security measures are threatened. For example, big financial institutions like the Vanguard Group, the mutual fund giant serving more than 50 million investors, offer clients the ability to access certain services over the phone by speaking instead of answering a security question.

“Your voice is unique, just like your fingerprint,” explains a November 2021 Vanguard video urging customers to sign up for voice verification. But voice-cloning advances suggest companies need to rethink this practice. Sardine’s Ranjan says he has already seen examples of people using voice cloning to successfully authenticate with a bank and access an account. A Vanguard spokesperson declined to comment on what steps it may be taking to protect against advances in cloning.
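
To make the weakness concrete, here is a minimal sketch of the scoring step behind embedding-based voice verification. It assumes the embeddings have already been produced by a speaker-encoder model (not shown); the vectors below are random placeholders and the 0.75 threshold is purely illustrative. The point is that a sufficiently good clone simply scores above whatever threshold the institution sets.

```python
# Toy scoring step for embedding-based voice verification. In a real system
# the vectors come from a trained speaker-encoder model; here they are just
# placeholder NumPy arrays, and the 0.75 threshold is illustrative.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Standard cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_speaker(enrolled: np.ndarray, sample: np.ndarray,
                   threshold: float = 0.75) -> bool:
    # A cloned voice that scores above the threshold is accepted just like
    # the real customer - the failure mode described above.
    return cosine_similarity(enrolled, sample) >= threshold

# Placeholder embeddings standing in for a speaker-encoder's output:
rng = np.random.default_rng(0)
enrolled = rng.normal(size=192)
caller = enrolled + rng.normal(scale=0.1, size=192)  # a "close enough" voice
print(verify_speaker(enrolled, caller))  # True
```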

Small businesses, and even larger ones, with informal procedures for paying bills or transferring funds are also vulnerable to bad actors. It’s long been common for fraudsters to E-Mail fake invoices asking for payment – bills that appear to come from a supplier. Now, using widely available AI tools, scammers can call company employees using a cloned version of an executive’s voice and pretend to authorize transactions or ask employees to disclose sensitive data in “vishing” or “voice phishing” attacks.

“If you’re talking about impersonating an executive for high-value fraud, that’s incredibly powerful and a very real threat,’’ says Persona CEO Rick Song, who describes this as his “biggest fear on the voice side.”

Increasingly, the criminals are using generative AI to outsmart the fraud-prevention specialists – the tech companies that function as the armed guards and Brinks trucks of today’s largely digital financial system.

One of the main functions of these firms is to verify consumers are who they say they are – protecting both financial institutions and their customers from loss. One way fraud-prevention businesses such as Socure, Mitek, and Onfido try to verify identities is a “liveness check” – they have you take a selfie photo or video, and they use the footage to match your face with the image of the ID you’re also required to submit. Knowing how this system works, thieves are buying images of real driver’s licenses on the dark web. They’re using video-morphing programs – tools that have been getting cheaper and more widely available – to superimpose that real face onto their own. They can then talk and move their head behind someone else’s digital face, increasing their chances of fooling a liveness check.
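
As a rough outline of the flow just described – and only an outline – the sketch below separates the two questions a liveness check answers. The helper functions are hypothetical stubs standing in for the computer-vision models real vendors use; the structure, not the stubs, is the point.

```python
# Outline of a selfie liveness check. detect_live_cues() and faces_match()
# are hypothetical stubs; real vendors use computer-vision models for both.
from dataclasses import dataclass

@dataclass
class LivenessResult:
    is_live: bool      # blink/motion cues consistent with a live person?
    matches_id: bool   # does the selfie face match the submitted ID photo?

def detect_live_cues(frames: list) -> bool:
    # Stub: real systems look for blinks, head motion, depth and screen-replay
    # artifacts - the very cues face-morphing tools now try to fake.
    return len(frames) > 1

def faces_match(frame, id_photo) -> bool:
    # Stub: real systems compare face embeddings of the video and the ID.
    return frame == id_photo

def run_liveness_check(frames: list, id_photo) -> LivenessResult:
    return LivenessResult(
        is_live=detect_live_cues(frames),
        matches_id=faces_match(frames[-1], id_photo),
    )

print(run_liveness_check(["frame_0", "face_a"], "face_a"))
```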



“There has been a pretty significant uptick in fake faces – high-quality, generated faces and automated attacks to impersonate liveness checks,” says Song. He says the surge varies by industry, but for some, “we probably see about ten times more than we did last year.” Fintech and crypto companies have seen particularly big jumps in such attacks.

Fraud experts say they suspect well-known identity-verification providers (for example, Socure and Mitek) have seen their fraud-prevention metrics degrade as a result. Socure CEO Johnny Ayers insists “that’s definitely not true” and says new models rolled out over the past several months have increased fraud-capture rates by 14% for the riskiest 2% of identities. He acknowledges, however, that some customers have been slow to adopt Socure’s new models, which can hurt performance.

“We have a top three bank that is four versions behind right now,” Ayers reports.

Mitek declined to comment specifically on its performance metrics, but senior vice president Chris Briggs says that if a given model was developed 18 months ago, “Yes, you could argue that an older model does not perform as well as a newer model.” Mitek’s models are “constantly being trained and retrained over time using real-life streams of data, as well as lab-based data.”

JPMorgan, Bank of America, and Wells Fargo all declined to comment on the challenges they’re facing with generative AI-powered fraud. A spokesperson for Chime, the largest digital bank in America and one that has suffered in the past from major fraud problems, says it hasn’t seen a rise in generative AI-related fraud attempts.

The thieves behind today’s financial scams range from lone wolves to sophisticated groups of dozens or even hundreds of criminals. The largest rings, like companies, have multi-layered organizational structures and highly technical members, including data scientists.



“They all have their own command and control center,” Ranjan says. Some participants simply generate leads – they send phishing E-Mails and make phone calls. If they get a fish on the line for a banking scam, they’ll hand the victim over to a colleague who pretends to be a bank branch manager and tries to persuade them to move money out of their account. Another key step: they’ll often ask the victim to install a remote-access program like TeamViewer or Citrix, which lets the scammers control the computer.

“They can completely black out your screen,” Ranjan says. “The scammer then might do even more purchases and withdraw [money] to another address in their control.” One common spiel used to fool folks, particularly older ones, is to say that a mark’s account has already been taken over by thieves and that the callers need the mark to cooperate to recover the funds.

None of this depends on using AI, but AI tools can make the scammers more efficient and believable in their ploys.

OpenAI has tried to introduce safeguards to prevent people from using ChatGPT for fraud. For instance, tell ChatGPT to draft an E-Mail that asks someone for their bank account number, and it refuses, saying, “I’m very sorry, but I can’t assist with that request.” Yet it remains easy to manipulate.
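
For developers building products on top of these models, one concrete safeguard is to screen text through OpenAI’s moderation endpoint before acting on it. The minimal sketch below assumes the official openai Python package (v1 or later) and an OPENAI_API_KEY in the environment; note that the endpoint targets categories such as harassment, hate and violence, so a fraud-focused platform would still need to layer its own checks on top.

```python
# Minimal sketch: screen user-supplied text with OpenAI's moderation endpoint.
# Assumes the official `openai` package (v1+) and OPENAI_API_KEY in the
# environment; the sample message and the handling are illustrative only.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def flagged_by_moderation(text: str) -> bool:
    # Returns True if the moderation endpoint flags the text.
    resp = client.moderations.create(input=text)
    return resp.results[0].flagged

draft = "Please reply with your bank account number to claim your refund."
if flagged_by_moderation(draft):
    print("Blocked: flagged by moderation.")
else:
    print("Passed moderation - fraud-specific checks are still needed.")
```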

OpenAI declined to comment for this article, pointing us only to its corporate blog posts, including a March 2022 entry that reads, “There is no silver bullet for responsible deployment, so we try to learn about and address our models’ limitations, and potential avenues for misuse, at every stage of development and deployment.”

Llama 2, the large language model released by Meta, is even easier for sophisticated criminals to weaponize because it’s open-source, meaning all of its code is available to see and use. That opens up a much wider set of ways bad actors can make it their own and do damage, experts say. For instance, people can build malicious AI tools on top of it. Meta didn’t respond to requests for comment, though CEO Mark Zuckerberg said in July that keeping Llama open-source can improve “safety and security, since open-source software is more scrutinized and more people can find and identify fixes for issues.”

The fraud-prevention companies are trying to innovate rapidly to keep up, increasingly looking at new types of data to spot bad actors.

“How you type, how you walk or how you hold your phone – these features define you, but they’re not accessible in the public domain,” Ranjan says. “To define someone as being who they say they are online, intrinsic AI will be important.” In other words, it will take AI to catch AI.
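
As one illustration of the kind of signal Ranjan means by “how you type,” the toy sketch below profiles a user’s typing rhythm from key-press timestamps and flags sessions that drift far from the enrolled profile. The profile numbers, the z-score threshold and the sample data are all hypothetical.

```python
# Toy behavioural-biometrics check based on keystroke timing, one of the
# "how you type" signals mentioned above. Timestamps are in seconds; the
# enrolled profile and the z-score threshold are hypothetical.
import statistics

def inter_key_intervals(timestamps: list) -> list:
    # Gaps between successive key presses.
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def is_anomalous(session_ts: list, profile_mean: float,
                 profile_stdev: float, z_threshold: float = 3.0) -> bool:
    # Flag a session whose average typing rhythm sits far outside the rhythm
    # observed when the account was enrolled.
    mean = statistics.mean(inter_key_intervals(session_ts))
    return abs(mean - profile_mean) / profile_stdev > z_threshold

# Enrolled user types with ~120 ms between keys; this session is much slower.
print(is_anomalous([0.0, 0.5, 1.1, 1.7, 2.2],
                   profile_mean=0.12, profile_stdev=0.02))  # True
```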
