
Researchers find a new way to train AIs to protect them from deadly adversarial attacks

WHY THIS MATTERS IN BRIEF

As more of the world comes to rely on AI, adversarial attacks pose a particular challenge to companies and people in critical industries who are responsible for the safety of people and systems.

 


Adversarial attacks, where hackers use the way that Artificial Intelligence (AI) sees the world against itself in order to trick it into doing things it shouldn’t, are getting more widespread – whether it’s tricking Teslas into speeding into oncoming traffic or tricking hospital systems into giving patients the all clear when in fact they have terminal cancer.

 

Now, in response to this increasing threat, a team of researchers from the University of Illinois (UI) has devised a new way to train AIs in an attempt to protect them from these attacks, and as we rely on AI more and more in our daily lives it’s no exaggeration to say that their work could help save lives.

Today, most adversarial research focuses on image recognition systems, but deep learning based image reconstruction systems have shown themselves to be vulnerable to adversarial attacks as well. This is particularly troubling in healthcare, where such systems are often used to reconstruct medical images like CT or MRI scans from raw scanner data, and where a targeted adversarial attack could cause a system to reconstruct a tumor in a scan where there isn’t one or, as mentioned, vice versa, giving doctors and patients false data which could be fatal.
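
To make the threat concrete, here is a minimal sketch of how such a targeted attack can work in practice, written in PyTorch against a hypothetical reconstruction model. The names recon_net, measurements and target_image are illustrative assumptions, not anything from the researchers’ paper: the attacker simply searches for a tiny, bounded change to the input measurements that steers the reconstruction toward an image of their choosing.

```python
# A minimal sketch (not the researchers' code) of a targeted adversarial attack
# on an image reconstruction network. Assumes a PyTorch model `recon_net` that
# maps raw measurements to reconstructed images; all names are hypothetical.
import torch

def targeted_attack(recon_net, measurements, target_image,
                    epsilon=0.03, step_size=0.005, steps=40):
    """Projected gradient descent: find a small perturbation of the raw
    measurements that pushes the reconstruction toward the attacker's target
    (e.g. a scan showing a tumor that isn't really there)."""
    perturbation = torch.zeros_like(measurements, requires_grad=True)
    for _ in range(steps):
        recon = recon_net(measurements + perturbation)
        # Drive the reconstruction toward the attacker's chosen target image.
        loss = torch.nn.functional.mse_loss(recon, target_image)
        loss.backward()
        with torch.no_grad():
            perturbation -= step_size * perturbation.grad.sign()
            perturbation.clamp_(-epsilon, epsilon)  # keep the change imperceptibly small
        perturbation.grad.zero_()
    return (measurements + perturbation).detach()
```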

 

As part of their research Bo Li and her colleagues at UI have proposed a new method for training these deep learning systems so they’re more fail-proof, and therefore more trustworthy, in what they call “safety critical scenarios.”

During their research they pitted the neural networks responsible for image reconstruction against other neural networks responsible for generating adversarial examples, in a set-up similar to GAN algorithms. Over iterative rounds the adversarial network attempts to fool the reconstruction network into producing things that aren’t part of the original data, or “ground truth,” while the reconstruction network continuously tweaks itself to avoid being fooled, which in turn makes it safer to use in the real world.
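
In rough terms, that training loop looks something like the PyTorch sketch below. This is only an illustration of the general adversarial training idea under stated assumptions, not the authors’ actual code; recon_net, attack_net and the bounded-perturbation trick are all hypothetical stand-ins.

```python
# A minimal sketch, assuming PyTorch, of a GAN-style adversarial training step:
# an attacker network learns perturbations that fool the reconstruction network,
# while the reconstruction network is updated to stay close to the ground truth
# anyway. Illustrative only, not the authors' implementation.
import torch

def adversarial_training_step(recon_net, attack_net, measurements, ground_truth,
                              recon_opt, attack_opt, epsilon=0.03):
    mse = torch.nn.functional.mse_loss

    # 1) Update the attacker: learn a bounded perturbation of the measurements
    #    that maximizes the reconstruction error.
    perturbation = epsilon * torch.tanh(attack_net(measurements))
    attack_loss = -mse(recon_net(measurements + perturbation), ground_truth)
    attack_opt.zero_grad()
    attack_loss.backward()
    attack_opt.step()

    # 2) Update the reconstructor: stay close to the ground truth on both the
    #    clean and the adversarially perturbed measurements.
    with torch.no_grad():
        perturbation = epsilon * torch.tanh(attack_net(measurements))
    recon_loss = (mse(recon_net(measurements), ground_truth)
                  + mse(recon_net(measurements + perturbation), ground_truth))
    recon_opt.zero_grad()
    recon_loss.backward()
    recon_opt.step()
    return recon_loss.item()
```

Repeating a step like this over many batches is what, in principle, makes the reconstruction network progressively harder to fool.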

And as for the results: when the researchers tested their adversarially trained neural network on two popular image datasets, it was able to reconstruct the ground truth better than other neural networks that had been “fail proofed” with different methods.

 

The results still aren’t perfect, obviously, and this is a work in progress, but it’s a start. The work will be presented next week at the International Conference on Machine Learning (ICML).

 
