To beat deepfakes, researchers built a smarter camera

WHY THIS MATTERS IN BRIEF

Deepfakes and synthetic media are here to stay, and they’re getting better – better at spawning misinformation and undermining democracy.

 

Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, connect, watch a keynote, or browse my blog.

One of the most difficult things about detecting deepfakes and manipulated photos is that digital photo files aren’t coded to be tamper evident. But researchers from New York University (NYU), along with other researchers and startups around the world, are starting to develop strategies that make it easier to tell whether a photo has been altered, as well as new ways to prevent your likeness from being deepfaked, opening up a potential new front in the war on fakery.

 

Forensic analysts have been able to identify some digital characteristics they can use to detect meddling, but these indicators don’t always paint a reliable picture of the manipulations a photo has undergone. And many common types of post-processing, like the file compression applied when photos are uploaded and shared online, strip away these clues anyway.
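
One classic indicator of this kind is error level analysis (ELA), which exploits the fact that a spliced or retouched region often has a different JPEG compression history than the rest of the picture. Here is a minimal sketch using Pillow – a standard forensic trick, not the NYU technique, and one that is defeated by exactly the kind of re-compression described above:

```python
from PIL import Image, ImageChops
import io

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Difference between an image and a fresh JPEG re-save of itself."""
    original = Image.open(path).convert("RGB")

    # Re-save the image at a known JPEG quality, entirely in memory.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer).convert("RGB")

    # Regions whose compression history differs from their surroundings
    # (often spliced or retouched areas) light up in the difference image.
    return ImageChops.difference(original, resaved)

# Usage: error_level_analysis("photo.jpg").save("photo_ela.png")
```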

But what if a tamper-evident seal originated from the camera that took the photo itself? The NYU team demonstrates that you could adapt a camera’s signal processor – whether it’s in a fancy DSLR or a regular smartphone – so that it essentially places a watermark in each photo’s code. The researchers propose training a neural network to power the photo development process that happens inside cameras, so that as the processor interprets the light hitting the sensor and turns it into a high-quality image, it also marks the file with indelible indicators that forensic analysts can check later if needed.
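
As a rough sketch of what such a learned, watermark-aware development step could look like – the architecture, sizes, and the simple per-channel embedding below are hypothetical placeholders, not the NYU team’s actual design – consider this PyTorch snippet:

```python
import torch
import torch.nn as nn

class WatermarkingISP(nn.Module):
    """Toy neural image signal processor that embeds a watermark code."""

    def __init__(self, code_bits: int = 64):
        super().__init__()
        # Learned stand-in for the development pipeline (demosaic, denoise, ...).
        self.develop = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )
        # Maps the watermark code to a small per-channel offset. A real design
        # would produce a spatially varying, content-adaptive pattern instead.
        self.embed = nn.Sequential(
            nn.Linear(code_bits, 32), nn.ReLU(),
            nn.Linear(32, 3),
        )

    def forward(self, raw: torch.Tensor, code: torch.Tensor) -> torch.Tensor:
        image = self.develop(raw)                  # (B, 3, H, W)
        mark = self.embed(code)[:, :, None, None]  # (B, 3, 1, 1)
        # Add an imperceptibly small watermark signal to the developed image.
        # During training, a companion decoder network would be asked to
        # recover `code` from the output (and to fail once the content is
        # edited), while a quality loss keeps the watermark invisible.
        return torch.clamp(image + 0.01 * mark, 0.0, 1.0)

raw = torch.rand(1, 4, 64, 64)                 # toy 4-channel Bayer-like input
code = torch.randint(0, 2, (1, 64)).float()    # the watermark bit string
out = WatermarkingISP()(raw, code)
print(out.shape)                               # torch.Size([1, 3, 64, 64])
```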

 

“People are still not thinking about security – you have to go close to the source where the image is captured,” says Nasir Memon, one of the project researchers from NYU who specializes in multimedia security and forensics. “So what we’re doing in this work is we are creating an image which is forensics-friendly, which will allow better forensic analysis than a typical image. It’s a proactive approach rather than just creating images for their visual quality and then hoping that forensics techniques work after the fact.”

The main thing consumers expect from cameras is ever-improving image quality and fidelity. So one main focus of the project was showing that incorporating machine learning into the image signal processing that goes on inside a camera doesn’t visibly detract from photo quality while it paves the way for tamper-resistant elements. And adding these features within the image-generation hardware itself means that by the time files are being stored on the camera’s SD card or other memory – where they’re potentially at risk of manipulation – they are already imbued with their tamper-evident seals.

 

The researchers mainly insert their watermarks into certain color frequencies, so they will persist through typical post-processing, like compression or brightness adjustments, but show modification if the content of an image is altered. Overall, the forensics-friendly additions improved image manipulation detection accuracy from about 45 percent to more than 90 percent.
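
The general principle can be illustrated with a toy quantization index modulation watermark hidden in mid-frequency DCT coefficients, the kind of band that moderate compression tends to preserve. Everything here, from the band choice to the step size, is an illustrative textbook construction rather than the scheme from the NYU paper:

```python
import numpy as np
from scipy.fft import dctn, idctn

STEP = 8.0  # quantization step: a hypothetical strength/robustness trade-off

def _band(shape, n):
    # A hypothetical mid-frequency band: a short diagonal away from the
    # low-frequency corner, where compression preserves coefficients well.
    r0, c0 = shape[0] // 8, shape[1] // 8
    idx = np.arange(n)
    return r0 + idx, c0 + idx

def embed_bits(channel: np.ndarray, bits: np.ndarray) -> np.ndarray:
    coeffs = dctn(channel.astype(np.float64), norm="ortho")
    rows, cols = _band(channel.shape, len(bits))
    # Quantization index modulation: snap each chosen coefficient to an
    # even or odd multiple of STEP according to the bit it carries.
    q = np.round(coeffs[rows, cols] / STEP)
    q += (q.astype(int) % 2) != bits   # fix parity where it mismatches the bit
    coeffs[rows, cols] = q * STEP
    # (A real pipeline would also clamp back to the valid pixel range.)
    return idctn(coeffs, norm="ortho")

def read_bits(channel: np.ndarray, n: int) -> np.ndarray:
    coeffs = dctn(channel.astype(np.float64), norm="ortho")
    rows, cols = _band(channel.shape, n)
    return np.round(coeffs[rows, cols] / STEP).astype(int) % 2

rng = np.random.default_rng(0)
image = rng.integers(0, 256, (256, 256)).astype(np.float64)
bits = rng.integers(0, 2, 32)
marked = embed_bits(image, bits)
assert np.array_equal(read_bits(marked, 32), bits)
```

Mild compression or brightness shifts nudge these coefficients only slightly, so the embedded parities usually survive, while splicing or inpainting rewrites them wholesale and breaks the pattern.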

Deepfakes have become a major concern as their use in disinformation campaigns, social media manipulation, and propaganda grows worldwide. And being able to reliably identify them is crucial to combating false narratives. The NYU researchers, who will present their work in June at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) in Long Beach, California, emphasize that there is no panacea for dealing with the problem. They suggest that foundational watermarking techniques like theirs would be most effective when used in combination with other methods for spotting fakes and forgeries.

 

“A lot of the research interest is in developing techniques to use machine learning to detect if something is real or fake,” NYU’s Memon says. “That’s definitely something that needs to be done – we need to develop techniques to detect fake and real images – but it’s also a cat and mouse game. Many of the techniques that you develop will eventually be circumvented by reasonably well-equipped, reasonably smart adversaries.”

As with any security technology, though, the same could potentially be said about watermarking technology, even when the anti-tampering features are inserted during image creation.

“As the research and industrial communities consider this technology, I do think they should be wary of potential risks posed by anti-forensic attacks and adversarial machine learning,” says Matthew Stamm, an information forensics researcher at Drexel University. “This technology is a very interesting and creative approach to watermark-based image security and opens up new ways for researchers to design watermarks and other security measures that could be added into images by a camera. But it’s feasible that an attacker might be able to create a deep learning network to remove these security artifacts, allow an image to be modified or falsified, then re-insert the security artifacts afterward.”

 

Stamm also points out that it’s important to consider the privacy implications of adding a watermark to digital images. Depending on how such a system is implemented, the tamper-evident traces could create a way of tracking photo files or fingerprinting cameras, which would impact people’s privacy. There are already other ways to do this, though, since every camera has unique sensor imperfections that can be used for identification. And watermarking schemes could prioritize privacy-preserving approaches.
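
The sensor imperfections Stamm is referring to are known as photo-response non-uniformity (PRNU), and the identification idea can be sketched in a few lines. This toy version uses a simple Gaussian denoiser and a hypothetical decision threshold; real systems use far more careful filtering and statistical tests:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(image: np.ndarray) -> np.ndarray:
    # Subtracting a denoised (here: Gaussian-smoothed) version leaves the
    # high-frequency part, where the sensor's fixed pattern noise lives.
    return image - gaussian_filter(image, sigma=1.5)

def estimate_fingerprint(images: list) -> np.ndarray:
    # Scene content averages out over many photos from the same camera;
    # the fixed sensor pattern does not.
    return np.mean([noise_residual(im) for im in images], axis=0)

def likely_same_camera(image: np.ndarray, fingerprint: np.ndarray,
                       threshold: float = 0.05) -> bool:
    # The threshold is a hypothetical choice; real systems calibrate it.
    corr = np.corrcoef(noise_residual(image).ravel(),
                       fingerprint.ravel())[0, 1]
    return corr > threshold
```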

For forensic watermarking to really make an impact on curbing deepfakes, though, it would also need to work on video – something the researchers say they haven’t broached yet, but that would be theoretically possible.

 

And even just getting manufacturers to integrate such protections for still images would be challenging – the perennial hurdle of pushing security features. There would need to be a clear economic incentive for camera-makers to overhaul their image signal processors.

Drexel’s Stamm points out, though, that watermarking technology could make a big impact even if it is only implemented in certain cameras used in high-sensitivity situations, like those used to take crime scene photos or those used by news outlets. And as deepfakes become an increasingly ubiquitous threat, the motivation for universal adoption may arrive sooner than you think.
