
MIT tests the future of cybersecurity, pits AIs against AIs


WHY THIS MATTERS IN BRIEF

In the future cybersecurity will be almost fully automated using AI, and many say we've already entered a new AI-fuelled digital arms race.

 

Hot on the heels of the Capture the Flag cybersecurity competition DARPA held last year, in which thirteen of the world's most advanced offensive and defensive cybersecurity Artificial Intelligence (AI) agents tried to hack each other while fixing their own vulnerabilities, MIT is now pioneering its own cybersecurity challenge, and they're upping the stakes.

 

The team at MIT have created a five-month-long competition, in partnership with the data science community Kaggle, that will see offensive and defensive AI algorithms go at each other day and night until one wins. Whatever the result, it's fair to say this is the future of cybersecurity and cyberwarfare and, thanks to British cybersecurity firm Darktrace, automated cyberdefence is already, in part at least, a reality.

The competition will pit different researchers’ algorithms against one another in attempts to confuse and trick each other, with the hope being that this combat will yield fresh insights into how to harden machine learning systems against future attacks.

“It’s a brilliant idea to catalyse research into both fooling deep neural networks and designing deep neural networks that cannot be fooled,” says Jeff Clune, an assistant professor at the University of Wyoming who studies the limits of machine learning.

The contest will have three components.

One challenge will involve simply trying to confuse a machine learning system so that it doesn’t work properly. Another will involve trying to force a system to classify something incorrectly. And a third will involve developing the most robust defences, and all of the results will be presented at a major AI conference later this year.
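To make the first two components concrete, here is a minimal, hypothetical sketch of the difference between an untargeted attack, which simply pushes an input towards any wrong answer, and a targeted attack, which pushes it towards a specific wrong label. It assumes PyTorch, and names like `model`, `image`, `label` and `target_label` are illustrative placeholders rather than anything specified by the contest itself.

```python
# Hedged sketch: fast-gradient-sign style perturbations illustrating the
# contest's first two components. `model`, `image`, `label` and
# `target_label` are assumed placeholders, not contest specifics.
import torch
import torch.nn.functional as F

def untargeted_attack(model, image, label, eps=0.03):
    """Nudge the input so the model no longer predicts its true label."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step *up* the loss gradient: make the correct answer less likely.
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()

def targeted_attack(model, image, target_label, eps=0.03):
    """Nudge the input so the model predicts a chosen, incorrect label."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), target_label)
    loss.backward()
    # Step *down* the loss gradient for the target: make it more likely.
    return (image - eps * image.grad.sign()).clamp(0, 1).detach()
```

A defence entry, the third component, would then be judged on how well its classifier holds up when fed inputs produced by attacks along these lines.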

 

Machine learning, and deep learning in particular, is rapidly becoming an indispensable tool in many industries. The technology involves feeding data into a special kind of computer program, specifying a particular outcome, and having the machine develop its own algorithm to achieve that outcome. Deep learning does this by tweaking the parameters of a huge, interconnected web of mathematically simulated neurons.
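As an illustration of that description, and not something from the article itself, the sketch below trains a tiny web of simulated neurons on toy data, assuming PyTorch is available: data goes in, the desired outcome is specified as a loss, and the training loop tweaks the parameters until the outputs match.

```python
# Minimal sketch of the deep learning workflow described above.
# The data, network size and hyperparameters are arbitrary examples.
import torch
import torch.nn as nn

# Toy data: the label is 1 if the sum of the inputs is positive, else 0.
X = torch.randn(1000, 10)
y = (X.sum(dim=1) > 0).long()

# A small "web of simulated neurons": two fully connected layers.
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):
    optimizer.zero_grad()
    logits = model(X)
    loss = loss_fn(logits, y)   # how far the outputs are from the specified outcome
    loss.backward()             # work out how to tweak each parameter
    optimizer.step()            # tweak the simulated neurons' weights

accuracy = (model(X).argmax(dim=1) == y).float().mean()
print(f"training accuracy: {accuracy:.2%}")
```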

It's long been known that machine learning systems can be fooled. All that spam that slips into your inbox, for example, doesn't get there by accident; it's been designed and tweaked by spammers doing their best to evade modern algorithmic spam filters.

In recent years, however, researchers have shown that even the smartest algorithms can sometimes be misled in surprising ways. For example, deep learning algorithms with near-human skill at recognising objects, such as faces, in images can be fooled by seemingly abstract or random images that exploit the low-level patterns these algorithms look for.
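That "abstract or random image" result can be reproduced in outline: start from noise and optimise it until a trained classifier reports high confidence for an arbitrary class. The sketch below is a hedged illustration, not the researchers' actual method; it assumes a pre-trained torchvision ResNet, and the class index is just an example.

```python
# Sketch of a "fooling image": random noise optimised until a pre-trained
# classifier reports high confidence for an arbitrary class.
# Assumes torchvision >= 0.13; class index 207 is an arbitrary example.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
target_class = 207                         # arbitrary ImageNet class index

image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    logits = model(image)
    loss = -logits[0, target_class]        # maximise the target class score
    loss.backward()
    optimizer.step()
    image.data.clamp_(0, 1)                # keep pixels in a valid range

confidence = F.softmax(model(image), dim=1)[0, target_class]
print(f"confidence in class {target_class}: {confidence:.2%}")
```

To a human the result typically still looks like noise, yet the network reports it as the chosen class with high confidence.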

“Adversarial machine learning is more difficult to study than conventional machine learning – it’s hard to tell if your attack is strong or if your defense is actually weak,” says Ian Goodfellow, a researcher at Google Brain, a division of Google dedicated to researching and applying machine learning, who helped arrange the contest.

 

As machine learning becomes pervasive, the fear is that such attacks could be used for profit and mischief. As more companies come to rely on AI and embed it into their core systems and architectures, even though it's still regarded as a black box, a criminal who figures out a way to trick or fool those AIs could do real damage, both virtual and physical.

“Computer security is definitely moving toward machine learning,” Goodfellow says. “The bad guys will be using machine learning to automate their attacks, and we will be using machine learning to defend.”

In theory, for example, criminals and terrorists might try to fool voice and facial recognition systems at airports, or even put up posters that fool the vision systems of self-driving cars, causing them to crash, and those are just three of the happy future scenarios that lie ahead of us all.

Kaggle, which was bought by Google in March and is now part of the Google Cloud Platform, has become an increasingly valuable breeding ground for algorithm development and a hotbed of talented data scientists, many of whom are also helping to monitor every catch in the Pacific and to create new algorithmic, automated hedge funds, something I discussed in Frankfurt at the start of the year. As far as this competition is concerned, though, Goodfellow and another Google Brain researcher, Alexey Kurakin, submitted the idea for the challenge before the acquisition.

 

Benjamin Hamner, Kaggle’s cofounder and CTO, says he hopes the contest will draw attention to a looming problem.

“As machine learning becomes more widely used, understanding the issues and risks from adversarial learning becomes increasingly important,” he says, adding, “we believe that this research is best created and shared openly, instead of behind closed doors.”

Clune, meanwhile, says he is keen for the contest to test algorithms that supposedly can withstand attack.

“My money is on the networks continuing to be fooled for the foreseeable future,” he says.

As for my money? Well, last year Google tried to assess AI’s natural “killer instincts,” and ultimately managed to demonstrate that more powerful AIs were more likely to “kill” their weaker siblings. As AI becomes more capable, and as computing becomes more powerful, with new chemical, photonic, quantum and even DNA computing on the horizon, we’re inevitably heading down a new road where our fate, and the fate of the systems we rely on, will be at the mercy of these new automated offense and defense systems, and their offspring.

 

To top it off, last year some of the world’s top AI minds held a Doomsday Games to try to come up with worst case scenarios, and solutions, for a world that is increasingly bent on strapping AI into everything. The worrying part? They were great at figuring out the worst case scenarios, but the solutions for overcoming those scenarios? Not so hot; in fact you could say it was positively icy. And one day, bearing in mind that some AIs can already self-design, evolve and self-replicate, and are at the beginning of being able to self-code, it’s highly likely that, sooner rather than later, someone will release an advanced, autonomous AI “into the network,” something Elon Musk is always keen on preaching about. And after that, well, all bets could be off.

As for me, though, I’m just chuffed I wrote a whole article on the automated future of AI-based cybersecurity without mentioning Skynet. Ooops! What did Britney Spears say? I did it again…
