Even the AI Behind Deepfakes Can’t Save Us From Being Duped


“The dozen or so that I looked at have glaring artifacts that more modern face-swap techniques have eliminated,” says Hany Farid, a digital forensics expert at UC Berkeley who works on deepfakes. “Videos like this with visual artifacts are not what we should be training and testing our forensic techniques on. We need significantly higher quality content.”

Video: Google

Google says it created videos that vary in quality to improve the training of detection algorithms. Henry Ajder, a researcher at a UK company called Deeptrace Lab, which is collecting deepfakes and building its own detection technology, agrees that it's useful to have both good and poor deepfakes for training. Google also said in the blog post announcing the video dataset that it would add deepfakes over time to account for advances in the technology.

“Google and Facebook are outsourcing the problem.”

Britt Paris, Data & Society

The amount of effort being put into the development of deepfake detectors might seem to signal that a solution is on the way. Researchers are working on automated methods for spotting videos forged by hand as well as with AI. These detection tools increasingly rely, like deepfakes themselves, on machine learning and large amounts of training data. Darpa, the research arm of the Defense Department, runs a program that funds researchers working on automated forgery detection tools; it is increasingly focused on deepfakes.

Much more deepfake training data should soon be available. Facebook and Microsoft are building another, larger dataset of deepfake videos, which the companies plan to release to AI researchers at a conference in December.

Sam Gregory, program director for Witness, a project that trains activists to use video evidence to expose wrongdoing, says the new deepfake videos will be useful to academic researchers. But he also warns that deepfakes shared in the wild are always likely to be more challenging to spot automatically, given how they may be compressed or remixed in ways that may trick even a well-trained detector.

As deepfakes improve, Gregory and others say it will be necessary for humans to investigate the origins of a video or inconsistencies—a shadow out of place or the incorrect weather for a particular location—that may be imperceptible to an algorithm.

“There is a future for [automated] detection as a partial solution,” Gregory says. He believes that technical solutions could help alert users and the media to deepfakes, but adds that people need to become more savvy about new possibilities for deception.


Videos can, of course, also be manipulated to deceive without the use of AI. A report published last month by Data & Society, a nonprofit research group, notes that video manipulation already goes well beyond deepfakery. Simple modifications and edits can be just as effective in misleading people, and are harder to spot using automated tools. A recent example is the video clip of Nancy Pelosi slowed down to make it appear as if she were slurring her words.

