
IBM injected a virus into a neural net to create an undetectable cyberweapon

WHY THIS MATTERS IN BRIEF

Even today’s cutting-edge cybersecurity products have no defence against virus-laden, weaponised neural networks.

 

It’s been a busy few weeks in the field of Artificial Intelligence (AI) and neural networks, with the creation of the world’s first DNA neural network and the world’s first 3D printed physical neural network. Now IBM, hot on the heels of DeepMind’s announcement of the world’s first AGI and the news that a supercomputer built a superior neural network in just a day, has unveiled yet another world first.

 

See also
Google goes all in on building AI's that build new AI's

 

You may think today’s malware is bad, but AI may soon make malicious software nearly impossible to detect as it waits for just the right person to sit in front of the computer. That’s according to work by a group of IBM researchers who “inserted viruses into AI neural nets” and revealed their results at the Black Hat cybersecurity conference in Las Vegas last week. And that’s before we discuss the impact of autonomous defensive and offensive AI robo-hackers, like the ones used by the Pentagon to secure its critical systems, which “hack and patch” systems 100 million times faster than humans can, or of self-coding AIs like Microsoft’s DeepCoder and Google’s Bayou, which scavenge code to build new programs. All of these could change the cybersecurity game again.

Here’s how the new smart malware works, and why it’s such a significant threat to, well, just about everyone who uses a computing device. Traditional virus-catching software finds malicious code on your computer by matching it against a stored library of known malware, and more sophisticated anti-virus tools can deduce that unknown code is malware because of behaviours like targeting sensitive data. Advanced defensive software creates virtual environments, called sandboxes, in which suspicious file payloads are opened to see how they act.
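To make the signature-matching idea concrete, here’s a minimal sketch in Python. The hash set, file name, and whole-file hashing approach are all illustrative assumptions; real antivirus engines match byte-level patterns and heuristics, not just full-file digests.

```python
import hashlib
from pathlib import Path

# Hypothetical library of known-bad SHA-256 digests. Real scanners match
# byte-level signatures and behavioural heuristics, not whole-file hashes.
KNOWN_MALWARE_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def scan_file(path: Path) -> bool:
    """Return True if the file's digest matches a known malware signature."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_MALWARE_HASHES

if __name__ == "__main__":
    suspect = Path("suspect.bin")  # hypothetical file under inspection
    if suspect.exists():
        print("malicious" if scan_file(suspect) else "no known signature")
```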

 

See also
This AI can tell what you're typing just by listening

 

Now enter deep neural nets, or DNNs, which defy easy probing and exploration even by advanced human analysts, let alone by software. In much the same way that the inner workings of the mind are a mystery, it’s nearly impossible to understand how neural networks actually arrive at the outputs they produce.

A simple neural network has three layers. The first layer receives inputs from the outside world; those could be keyboard commands, sensed images, or something else. The second layer, called the hidden layer, is the indecipherable one: it’s where the network trains itself to do something with the input it received from the first layer. The final layer is the output, the end result of the process. Because neural networks train themselves, it’s nearly impossible to see how they arrive at their conclusions.
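For illustration, here’s a minimal sketch of that three-layer structure in Python with NumPy. The layer sizes, random weights, and sigmoid activation are arbitrary choices for the example, not anything from IBM’s work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary layer sizes: 4 inputs -> 8 hidden units -> 2 outputs.
W1 = rng.normal(size=(4, 8))   # input layer  -> hidden layer weights
W2 = rng.normal(size=(8, 2))   # hidden layer -> output layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    hidden = sigmoid(x @ W1)     # the "hidden layer": opaque learned features
    return sigmoid(hidden @ W2)  # the output layer: the end result

print(forward(np.array([0.2, 0.5, 0.1, 0.9])))
```

The values flowing through `hidden` are exactly the part analysts can’t easily interpret: they’re just numbers the network settled on during training, with no human-readable meaning attached.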

The opaque nature of DNNs is one reason why policy, intelligence, and defence leaders have serious reservations about employing them in life-or-death situations. After all, it’s hard for a commander to justify the decision to drop a bomb on a target based on a “black box” process that no one can explain, a problem that DARPA, the US military’s bleeding-edge research arm, is trying to solve. That said, neural networks are becoming increasingly popular in commercial and civilian settings, such as market forecasting, because they work so well.

 

See also
China touts an AI that can design its own hypersonic weapons

 

The IBM researchers say they figured out a way to weaponise that hidden layer, and that presents a huge new threat, although there is hope that a new IBM neural network watermarking tool, which could be used to prevent both the plagiarism and the sabotage of neural networks, could provide some form of defence.
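The article doesn’t detail how that watermarking works, but published schemes in this vein, including IBM’s own research, embed a secret set of trigger inputs whose outputs prove ownership. The sketch below illustrates that general idea only; the trigger data, threshold, and stand-in `model` function are entirely hypothetical.

```python
import numpy as np

# Hypothetical trigger set: secret inputs and the labels the watermarked
# model was trained to emit for them. Real schemes choose these carefully.
TRIGGER_INPUTS = np.array([[0.9, 0.1, 0.1, 0.9], [0.1, 0.9, 0.9, 0.1]])
TRIGGER_LABELS = np.array([1, 0])

def verify_watermark(model, threshold=1.0):
    """Claim ownership if the model reproduces the secret trigger labels."""
    predictions = np.array([model(x) for x in TRIGGER_INPUTS])
    match_rate = np.mean(predictions == TRIGGER_LABELS)
    return match_rate >= threshold

# Stand-in model that happens to carry the watermark.
watermarked = lambda x: int(x[0] > x[1])
print(verify_watermark(watermarked))  # True -> watermark present
```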

“It’s going to be very difficult to figure out what it is targeting, when it will target, and the malicious code,” said Jiyong Jang, one of the researchers on the project.

“The complex decision-making process of a [deep neural net] model is encoded in the hidden layer. A conventional virus scanner can’t identify the intended targets and a sandbox can’t trigger its malicious behavior to see how it works,” added head researcher Marc Ph. Stoecklin.

 

See also
Facebook 3D photos sources depth information straight from your camera

 

That’s because the program needs a key to open it up, a series of values that matches an internal code. The IBM team decided to make the key a specific person’s face, or more precisely, the set of data generated by a facial-recognition algorithm. They concealed it in applications that don’t trigger a response from antivirus programs, like the ones that run the camera, and the neural network will only produce the key when the face in view matches the face it’s expecting. With the camera under its control, the DNN sits quietly, waiting and watching for the right person. When that person’s face appears in front of the computer, the DNN uses the key to decrypt the malware payload and launch the attack.
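To illustrate the keying mechanism the researchers describe, sometimes called environmental or target keying, here’s a minimal, benign sketch. The face-embedding values, quantisation step, and use of Fernet encryption are all illustrative assumptions, with a harmless string standing in for the payload.

```python
import base64
import hashlib
import numpy as np
from cryptography.fernet import Fernet, InvalidToken

def embedding_to_key(embedding: np.ndarray) -> bytes:
    """Derive a symmetric key from a (quantised) face embedding."""
    quantised = np.round(embedding, 1).tobytes()  # tolerate small variations
    return base64.urlsafe_b64encode(hashlib.sha256(quantised).digest())

# Hypothetical embedding of the target's face, e.g. from a recognition model.
target_embedding = np.array([0.12, 0.87, 0.33, 0.56])

# A harmless string stands in for the payload; only the right face unlocks it.
locked = Fernet(embedding_to_key(target_embedding)).encrypt(b"hello, target")

def try_unlock(observed_embedding: np.ndarray):
    try:
        return Fernet(embedding_to_key(observed_embedding)).decrypt(locked)
    except InvalidToken:
        return None  # wrong face: the payload stays opaque, nothing to scan

print(try_unlock(np.array([0.11, 0.88, 0.31, 0.57])))  # b'hello, target'
print(try_unlock(np.array([0.5, 0.5, 0.5, 0.5])))      # None
```

The point the researchers make falls out of this structure: until the right face shows up, there is no decrypted code for a scanner to match and no behaviour for a sandbox to trigger.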

And face data is just one kind of trigger, the team said; audio and other signals could also be used. The world of cyber warfare, and its game of cat and mouse, will likely be a war without end, and hackers and nation states might just have gotten themselves the cybersecurity equivalent of the nuclear bomb… and that could be underestimating the threat. Fun times…
