New AI Fake News generator is too dangerous to release, say its creators

WHY THIS MATTERS IN BRIEF

It will not be long before we cannot tell the difference between real content and fake content, and that has a myriad of very serious implications, none of which we are prepared for.

There are several very scary AIs in existence today. Take, for example, the Russian Dead Hand AI that will rain nuclear terror on the world if Russia’s leaders are killed in a “decapitation strike,” or the autonomous robo-hacker AI called Mayhem that’s now in charge of protecting the Pentagon’s most critical systems. Now there’s another dangerous AI to add to the list, after the creators of a revolutionary new Artificial Intelligence (AI) system that can write essays, scripts, stories and works of fiction, and which has been dubbed “Deepfakes for text,” took the unusual step of not releasing their research publicly for fear of potential misuse, for example using it to create Fake News, something I’ve written about extensively before.

OpenAI, a nonprofit research company backed by none other than Elon Musk, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public, in order to allow more time to discuss the ramifications of the technological breakthrough.

At its core, GPT2 is a text generator, along the same lines as the ones researchers and hobbyists are already using to write the next instalment in the Game of Thrones saga for fun, scripts for adverts like the one IBM and Lexus just released, and movies like the one Wired recently produced.

The AI system is fed text, anything from a few words to a whole page, such as the first paragraph of the unicorn story below, and is asked to write the next few sentences based on its predictions of what should come next; everything after that opening paragraph was written by the machine. And today, as you can see from its unicorn story, the system is pushing the boundaries of what was thought possible, both in terms of the quality of the output and the wide variety of potential uses. And this is just one of hundreds of examples, all of which are, frankly, stunningly good for an AI.

In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.

Pérez and the others then ventured further into the valley. “By the time we reached the top of one peak, the water looked blue, with some crystals on top,” said Pérez.

Pérez and his friends were astonished to see the unicorn herd. These creatures could be seen from the air without having to move too much to see them – they were so close they could touch their horns.

While examining these bizarre creatures the scientists discovered that the creatures also spoke some fairly regular English. Pérez stated, “We can see, for example, that they have a common ‘language,’ something like a dialect or dialectic.”

Dr. Pérez believes that the unicorns may have originated in Argentina, where the animals were believed to be descendants of a lost race of people who lived there before the arrival of humans in those parts of South America.

While their origins are still unclear, some believe that perhaps the creatures were created when a human and a unicorn met each other in a time before human civilization. According to Pérez, “In South America, such incidents seem to be quite common.”

However, Pérez also pointed out that it is likely that the only way of knowing for sure if unicorns are indeed the descendants of a lost alien race is through DNA. “But they seem to be able to communicate in English quite well, which I believe is a sign of evolution, or at least a change in social organization,” said the scientist.

When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.

Feed it the opening line of George Orwell’s Nineteen Eighty-Four – “It was a bright cold day in April, and the clocks were striking thirteen” – and the system recognises the vaguely futuristic tone and the novelistic style, and continues with:

“I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some school in a poor part of rural China. I started with Chinese history and history of science.”

Feed it the first few paragraphs of a Guardian story about Brexit, and its output is plausible newspaper prose, replete with “quotes” from Jeremy Corbyn, mentions of the Irish border, and answers from the prime minister’s spokesman.

One such, completely artificial, paragraph reads: “Asked to clarify the reports, a spokesman for May said: ‘The PM has made it absolutely clear her intention is to leave the EU as quickly as is possible and that will be under her negotiating mandate as confirmed in the Queen’s speech last week.’”
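
For a sense of what that prompt-then-continue loop looks like in practice, here is a minimal sketch in Python. It assumes a publicly available GPT2 checkpoint and the Hugging Face transformers library, purely for illustration; it is not OpenAI’s own code, and the continuations it produces will differ on every run.

```python
# Minimal sketch: give the model an opening line and ask it to predict what
# comes next. The model name ("gpt2") and library are assumptions for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "It was a bright cold day in April, and the clocks were striking thirteen."

continuations = generator(
    prompt,
    max_new_tokens=60,        # length of each generated continuation
    do_sample=True,           # sample rather than always pick the likeliest token
    top_k=50,                 # only sample from the 50 most likely next tokens
    num_return_sequences=3,   # produce three different continuations
)

for c in continuations:
    print(c["generated_text"])
    print("---")
```

Smaller public checkpoints will produce noticeably rougher text than the examples above, but the workflow is the same: the model simply keeps predicting the next word, over and over, in the style of whatever it was given.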

From a research standpoint, GPT2 is groundbreaking in two ways. One is its size, says Dario Amodei, OpenAI’s research director. The models “were 12 times bigger, and the dataset was 15 times bigger and much broader” than the previous state-of-the-art AI model. It was trained on a dataset containing about 10m articles, selected by trawling the social news site Reddit for links with more than three votes. The vast collection of text weighed in at 40 GB, enough to store about 35,000 copies of Moby Dick.
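
As a toy illustration of that selection step, the sketch below filters a handful of made-up link records the way the article describes, keeping only pages whose Reddit submissions earned more than three votes; the record format and the fetching step it implies are hypothetical simplifications of OpenAI’s actual pipeline.

```python
# Hypothetical records standing in for scraped Reddit submissions: each pairs an
# outbound link with the number of votes the submission received.
reddit_links = [
    {"url": "https://example.com/long-read", "votes": 12},
    {"url": "https://example.com/thin-spam", "votes": 1},
    {"url": "https://example.com/news-story", "votes": 4},
]

MIN_VOTES = 3  # the quality bar described above: more than three votes

# Keep only the links that clear the bar; in the real pipeline these pages would
# then be downloaded and their text added to the ~40 GB training corpus.
curated = [link["url"] for link in reddit_links if link["votes"] > MIN_VOTES]

print(curated)  # ['https://example.com/long-read', 'https://example.com/news-story']
```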

The amount of data GPT2 was trained on directly affected its quality, giving it more knowledge of how to understand written text. It also led to the second, and arguably even more important, breakthrough: GPT2 is far more general purpose than previous AI programs. By structuring the text it is fed, it can perform tasks including translation and summarisation, and pass simple reading comprehension tests, often performing as well as or better than other AIs that have been built specifically for those tasks.
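
To make the “structure the input, get a different task” idea concrete, the sketch below frames summarisation as plain text continuation by ending the prompt with a “TL;DR:” cue, a trick reported in OpenAI’s write-up. As before, the model name and the Hugging Face transformers library are assumptions used for illustration, and a small public checkpoint will only produce a rough summary.

```python
# Zero-shot summarisation by prompt structure alone: no task-specific training,
# just a "TL;DR:" cue appended to the input. Checkpoint name is an assumption.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

article = (
    "OpenAI has built a large language model that writes plausible passages of text. "
    "The lab says it will delay releasing the full model while it studies how the "
    "system could be misused, for example to mass-produce fake news."
)

prompt = article + "\nTL;DR:"

result = generator(prompt, max_new_tokens=30, do_sample=False)[0]["generated_text"]

# Everything the model appends after the cue is its attempted summary.
print(result[len(prompt):].strip())
```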

That quality, however, has also led OpenAI to go against its remit of pushing AI forward and keep GPT2 behind closed doors for the immediate future while it assesses what malicious users might be able to do with it.

“We need to perform experimentation to find out what they can and can’t do,” said Jack Clark, OpenAI’s head of policy. “If you can’t anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously.”

To show what that means, OpenAI made one version of GPT2 with a few modest tweaks that can be used to generate infinite positive – or negative – reviews of products. Spam and fake news are two other obvious potential downsides, as is the AI’s unfiltered nature: because it is trained on text from the internet, it is not hard to encourage it to generate bigoted text, conspiracy theories and so on.

Instead, the goal is to show what is possible, in order to prepare the world for what will be mainstream in just a year or two’s time.

“I have a term for this. The escalator from hell,” Clark said. “It’s always bringing the technology down in cost and down in price. The rules by which you can control technology have fundamentally changed. We’re not saying we know the right thing to do here, we’re not laying down the line and saying ‘this is the way’ … We are trying to develop more rigorous thinking here. We’re trying to build the road as we travel across it.”
