First-of-its-kind attack as criminals clone a CEO’s voice and con UK energy firm out of $243,000

WHY THIS MATTERS IN BRIEF

As AIs get better at creating fake, or synthetic, content, attacks like this one mark the start of a worrying new trend.

 


After Google’s Duplex demonstration last year, it’s clear that artificial intelligence’s ability to re-create realistic human voices, including those of celebrities like Bill Gates, has firmly blown past the uncanny valley – the point at which people can no longer tell synthetic content, which includes everything from synthetic images and video through to synthetic audio, from the real deal. And now, in what experts think could be a world first, criminals have used AI-based software to impersonate a CEO’s voice and demand the fraudulent transfer of €220,000 ($243,000) from a UK energy firm – and the transfer went ahead.

 


 

In the attack, the CEO of the UK-based firm thought he was speaking to his boss, the CEO of the firm’s German parent company, who asked him to send the funds to a Hungarian supplier. The caller said the request was urgent, instructing him to pay within an hour, according to the company’s insurer, Euler Hermes Group, which declined to name the victim companies.

Criminals are increasingly finding new ways to weaponise AI for malicious purposes, and whoever was behind this attack appears to have used AI-based software to successfully mimic the German executive’s voice over the phone.

Apparently the UK CEO “recognized his boss’ slight German accent and the melody of his voice on the phone,” said Rüdiger Kirsch, a fraud expert at Euler Hermes, a subsidiary of Allianz.

 


 

Several officials involved in the case said the voice-spoofing attack is the first cybercrime they’ve heard of that used AI in this way, and Euler Hermes, which covered the entire claim, added that it hasn’t dealt with any other cybercrimes involving AI – yet. But as so-called creative machines get better at generating synthetic content of all forms, it’s only a matter of time before this type of attack becomes commonplace and a real problem.

“Scams using AI are a new challenge for companies,” said Kirsch, adding: “Traditional cybersecurity tools designed to keep hackers off corporate networks can’t spot spoofed voices, even though several cybersecurity companies have developed products to detect so-called deepfakes.”
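To give a flavour of how such deepfake detectors can work, one classic audio feature is spectral flatness: natural, tonal signals concentrate energy in a few frequencies, while noisier or artefact-laden signals spread it out. The sketch below is a toy illustration of that single feature, not any vendor’s actual product – real detectors combine many features with trained models.

```python
import cmath
import math
import random

def power_spectrum(samples):
    """Naive DFT power spectrum (fine for a short illustrative frame)."""
    n = len(samples)
    spec = []
    for k in range(n // 2):
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        spec.append(abs(s) ** 2 + 1e-12)  # epsilon avoids log(0) below
    return spec

def spectral_flatness(samples):
    """Geometric mean / arithmetic mean of the power spectrum.
    Close to 1.0 for noise-like audio, close to 0.0 for tonal audio."""
    spec = power_spectrum(samples)
    log_mean = sum(math.log(p) for p in spec) / len(spec)
    return math.exp(log_mean) / (sum(spec) / len(spec))

# A pure tone (very tonal) versus deterministic pseudo-random noise.
tone = [math.sin(2 * math.pi * 8 * t / 256) for t in range(256)]
rng = random.Random(0)
noise = [rng.uniform(-1.0, 1.0) for _ in range(256)]

print(spectral_flatness(tone))   # near 0: energy lives in one frequency bin
print(spectral_flatness(noise))  # much higher: energy is spread evenly
```

A production system would compute dozens of such features per frame and feed them to a classifier; the point here is only that synthetic audio leaves measurable statistical fingerprints.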

“At the moment it’s unclear whether or not this is the first attack using AI in this way, or whether there have been other incidents that have gone unreported,” said Philipp Amann, head of strategy at Europol, the European police agency. “In time it’s likely that hackers will use the technology if it makes their attacks more successful or profitable.”

 


 

“The attackers responsible for defrauding the British energy company called three times,” said Kirsch. “After the transfer of the $243,000 went through, the hackers called to say the parent company had transferred money to reimburse the UK firm. They then made a third call later that day, again impersonating the CEO, and asked for a second payment. Because the transfer reimbursing the funds hadn’t yet arrived and the third call was from an Austrian phone number, the executive became suspicious, and he didn’t make the second payment.”

“The money that was transferred to the Hungarian bank account was subsequently moved to Mexico and distributed to other locations. Investigators haven’t identified any suspects,” added Kirsch.

“It’s also unclear whether the attackers used bots to react to the victim’s questions or used text-to-voice based systems,” said Amann.
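The red flags in this case – urgency pressure, an unfamiliar phone number, a promised reimbursement that never arrived – are exactly the kind of checks a payment-approval workflow could codify. Here is a minimal sketch of that idea; the field names and the €10,000 threshold are my own illustrative assumptions, not the victim firm’s actual process.

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    amount_eur: float
    urgent: bool                    # caller demands payment within the hour
    caller_number_known: bool       # number matches the executive's directory entry
    prior_promises_fulfilled: bool  # e.g. a promised reimbursement actually arrived

def red_flags(req: PaymentRequest) -> list:
    """Collect the warning signs seen in the reported attack."""
    flags = []
    if req.urgent:
        flags.append("urgency pressure")
    if not req.caller_number_known:
        flags.append("unfamiliar phone number")
    if not req.prior_promises_fulfilled:
        flags.append("unfulfilled prior promise")
    return flags

def requires_callback_verification(req: PaymentRequest) -> bool:
    """Any red flag on a large transfer triggers an out-of-band callback
    to a number already on file, rather than trusting the inbound call."""
    return req.amount_eur >= 10_000 and bool(red_flags(req))

# The third call in the reported attack: urgent, unfamiliar Austrian
# number, and the promised reimbursement had not arrived.
third_call = PaymentRequest(220_000, urgent=True,
                            caller_number_known=False,
                            prior_promises_fulfilled=False)
print(requires_callback_verification(third_call))  # True
```

The design choice worth noting is the out-of-band callback: a voice clone can control what the victim hears on an inbound call, but it cannot answer a number the victim dials independently.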

 


 

“A few software companies offer services that can quickly impersonate voices, and all it takes is a minute’s worth of audio to copy someone’s voice,” said Bobby Filar, director of data science at Endgame, a cybersecurity company. “You don’t need a PhD in mathematics to use it.”

“You can’t go around and be silent the entire time. You’re going to run into situations like this where you expose information that you never thought could be used against you,” said Filar.

“Applying machine learning technology to spoof voices makes cybercrime easier,” said Irakli Beridze, head of the Centre on AI and Robotics at the United Nations Interregional Crime and Justice Research Institute.

 


 

The UN center is researching technologies to detect fake videos, which Beridze said could be an even more useful tool for hackers. In the case at the UK energy firm, an unfamiliar phone number finally aroused suspicions.

“Imagine a video call with [a CEO’s] voice, the facial expressions you’re familiar with. Then you wouldn’t have any doubts at all,” he said. And that technology too is arriving quickly, in the form of so-called life-like digital humans, several increasingly impressive examples of which I’ve discussed and shown off recently. The net result is that this attack will be just the beginning of an entirely new type of spoofing attack, and you can only expect things to get crazier from here.
