You can now try the “world’s most dangerous” fake news AI for yourself


A basic version of the “world’s most advanced, and dangerous, fake news generator” is now online for you to try for yourself.


Interested in the Exponential Future? Connect, download a free E-Book, watch a keynote, or browse my blog.

This spring, the AI research lab OpenAI, co-founded by Elon Musk, made a splash with an Artificial Intelligence (AI) system that generates text, one so good at producing convincing and realistic articles, poems, and fake news that the secrets behind how it works were deemed “too dangerous to release.”


Now though, a couple of months on, the public has a chance to give it a try, at least a limited, dumbed-down version of it, and I’d strongly suggest you give it a whirl, as Ollie, an English teacher in the UK, recently did when I showed it off to teachers at a school near Reading during one of my Future of Education presentations.

Initially, OpenAI released an extremely restricted version of the system, citing concerns that it would be abused. Now they’ve released a more powerful version, although one still significantly limited compared to the full system, and you can check it out for yourself.

The way it works is amazingly simple. A user gives the system, called GPT-2, a prompt: a few words, a snippet of text, a passage from an article, what have you. The system has been trained on data drawn from the internet to “predict” the next words of the passage, meaning the AI will turn your prompt into a news article, a short story, or a poem.
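The predict-the-next-word loop described above can be sketched in miniature. The toy below is a simple bigram model, not OpenAI’s actual code or anything close to GPT-2’s neural network; it just illustrates the core autoregressive idea of repeatedly extending a prompt with a word predicted from what came before:

```python
import random
from collections import defaultdict

# Toy illustration of autoregressive generation: like GPT-2 (but vastly
# simpler), it repeatedly predicts the next word from the text so far.

def train_bigrams(corpus):
    """Map each word to the words observed to follow it in the corpus."""
    successors = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        successors[current].append(following)
    return successors

def generate(successors, prompt, length=10, seed=0):
    """Extend the prompt one predicted word at a time."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(length):
        candidates = successors.get(words[-1])
        if not candidates:
            break  # no continuation was ever observed for this word
        words.append(rng.choice(candidates))
    return " ".join(words)

corpus = "the storm hit the coast and the storm moved north across the coast"
model = train_bigrams(corpus)
print(generate(model, "the storm"))
```

GPT-2 does the same thing in spirit, except its “prediction” comes from a large neural network trained on millions of web pages rather than a lookup table, which is why its continuations read like coherent prose instead of word salad.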


The results can be quite sophisticated. When I tested it, I fed GPT-2 the beginnings of stories about snowstorms in the Northwest, about college students, and about GPT-2 itself. The system then took it from there, inventing imaginary scientists to quote and imaginary organizations to cite, and it even enthused about the rapid progress of AI.

OpenAI initially decided not to release the full system to the public, out of fears it could be used by malicious actors to swamp us all with fake news. Instead, they released smaller and less capable versions, a staggered rollout that OpenAI hopes will allow researchers to explore the system and learn from it while still keeping the potential risks at bay.


AI is getting more sophisticated — and that’s a big deal. It has the potential to assist us in tackling some of the biggest problems of our day, from drug development to clean energy. But researchers worry it can have unintended consequences, increase inequality, and, when systems get powerful enough, even pose real danger. We’re still figuring out how to balance AI’s benefits against its potential hazards.
