
OpenAI releases its new deepfake detector

WHY THIS MATTERS IN BRIEF

Deepfakes are polarising society and being used to destroy democracy, and so far very few methods for detecting them actually work well.

 

Love the Exponential Future? Join our XPotential Community, future-proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

As experts warn that images, audio and video generated by Artificial Intelligence (AI) could influence the fall elections, OpenAI is (yet again) releasing a tool designed to detect content created by its own popular image generator, DALL-E. But the prominent AI start-up acknowledges that this tool is only a small part of what will be needed to fight so-called deepfakes in the months and years to come.

 


 

On Tuesday, OpenAI said it would share its new deepfake detector with a small group of disinformation researchers so they could test the tool in real-world situations and help pinpoint ways it could be improved.

“This is to kick-start new research,” said Sandhini Agarwal, an OpenAI researcher who focuses on safety and policy. “That is really needed.”

 

The Misinformation Apocalypse, by Matthew Griffin

 

OpenAI said its new detector could correctly identify 98.8 percent of images created by DALL-E 3, the latest version of its image generator. But the company said the tool was not designed to detect images produced by other popular generators like Midjourney and Stability AI.

 


 

Because this kind of deepfake detector is driven by probabilities, it can never be perfect. So, like many other companies, nonprofits and academic labs, OpenAI is working to fight the problem in other ways.
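To see why, here is a back-of-the-envelope sketch in Python. The 98.8 percent detection rate is OpenAI's own figure from above, while the false-positive rate and the share of AI-generated images in circulation are assumptions invented purely for illustration: when genuine photos vastly outnumber fakes, even a very accurate detector will flag plenty of real images.

```python
# Back-of-the-envelope illustration of why a probabilistic detector is not
# a verdict. The 98.8% true-positive rate comes from the article; the 0.5%
# false-positive rate and the 1-in-1000 prevalence are assumptions made up
# for this example.
true_positive_rate = 0.988    # chance a DALL-E 3 image is flagged
false_positive_rate = 0.005   # assumed chance a real photo is flagged
prevalence = 0.001            # assumed share of images that are AI-generated

flagged_fake = prevalence * true_positive_rate
flagged_real = (1 - prevalence) * false_positive_rate

precision = flagged_fake / (flagged_fake + flagged_real)
print(f"Share of flagged images that are actually AI-generated: {precision:.1%}")
# ~16.5% -- most flags would be false alarms at this prevalence, which is
# why a score like this informs human review rather than settling the question.
```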

Like the tech giants Google and Meta, the company is joining the steering committee for the Coalition for Content Provenance and Authenticity, or C2PA, an effort to develop credentials for digital content. The C2PA standard is a kind of “nutrition label” for images, videos, audio clips and other files that shows when and how they were produced or altered — including with AI.
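To make the "nutrition label" idea concrete, here is a simplified, hypothetical provenance record sketched in Python. The field names are illustrative only and do not follow the actual C2PA manifest schema; in a real C2PA workflow the record is cryptographically signed and embedded in the file so that later tampering can be detected.

```python
import json
from datetime import datetime, timezone

# Simplified, hypothetical provenance record in the spirit of a C2PA
# "nutrition label". Field names are illustrative and do not match the
# real C2PA manifest schema.
manifest = {
    "title": "beach_sunset.png",
    "claim_generator": "DALL-E 3",                  # tool that produced the file
    "created": datetime.now(timezone.utc).isoformat(),
    "assertions": [
        {"action": "created", "digital_source_type": "trainedAlgorithmicMedia"},
        {"action": "edited", "software_agent": "photo editor (crop only)"},
    ],
    # In a real C2PA workflow the manifest is signed by the issuer and
    # embedded in the file, so later tampering breaks the signature.
    "signature": "<issuer-signed hash of the manifest and the image>",
}

print(json.dumps(manifest, indent=2))
```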

OpenAI also said it was developing ways of watermarking AI-generated content so that it can be easily identified in the moment, and that it hopes to make these watermarks difficult to remove, even though watermarks have so far proven quite easy to strip out.
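For a sense of how fragile naive watermarks are, the toy sketch below, which is emphatically not OpenAI's scheme, hides a bit pattern in the least significant bits of an image's pixel values using NumPy, then shows how a single pass of noise, standing in for lossy re-encoding, wipes it out.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit pattern in the least significant bit of each pixel value."""
    flat = pixels.flatten()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits   # overwrite the LSBs
    return flat.reshape(pixels.shape)

def read_watermark(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the hidden bit pattern from the least significant bits."""
    return pixels.flatten()[:n_bits] & 1

image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

marked = embed_watermark(image, mark)
print(read_watermark(marked, mark.size))    # [1 0 1 1 0 0 1 0] -- survives

# A single round of lossy re-encoding (simulated here with a little noise)
# is enough to destroy the mark, which is why naive watermarks are so easy
# to remove. Production systems aim for far more robust schemes.
noise = np.random.randint(-2, 3, marked.shape)
noisy = np.clip(marked.astype(int) + noise, 0, 255).astype(np.uint8)
print(read_watermark(noisy, mark.size))     # almost certainly corrupted
```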

 


 

Anchored by companies like OpenAI, Google and Meta, the AI industry is facing increasing pressure to account for the content its products make. Experts are calling on the industry to prevent users from generating misleading and malicious material — and to offer ways of tracing its origin and distribution.

In a year stacked with major elections around the world, calls for ways to monitor the lineage of AI content are growing more desperate. In recent months, AI-generated audio and imagery have already affected political campaigning and voting in places including India, Indonesia, Slovakia, and Taiwan.

OpenAI’s new deepfake detector may help stem the problem, but it won’t solve it. As Agarwal put it: In the fight against deepfakes, “there is no silver bullet.”
