Seal of AI-pproval

System Could Derail Video Fakes, Deep or Shallow

In the early days of Russia’s 2022 invasion of Ukraine, Ukrainian President Volodymyr Zelenskyy appeared in a video telling his soldiers to lay down their arms and surrender. At least, that’s what it looked like.

The clip was a high-tech deception, illustrating a danger that University of Maryland researchers are fighting with a new system designed to ferret out video and audio altered to create a "deepfake," or a harder-to-spot "shallowfake" that changes only a few key words or images.

“The gap between a generated video and real video is getting smaller,” says computer science Assistant Professor Nirupam Roy, who developed the new TalkLock system with Ph.D. student Irtaza Shahid. “There’s a good chance that in three or four years, the detection method used now will be impossible.”

To "lock" their talk, speakers display an ever-changing QR code on a phone or other screen alongside themselves as they're recorded on video. The system "listens" and embeds elements of the speech in the code. Viewers can later scan the code to verify the authenticity of any part of the video, even if it's posted in a different format or on a different platform.
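The mechanics amount to a rolling commitment: fingerprint what was just said, publish the fingerprint where the camera can see it, and let anyone re-derive and compare later. The Python sketch below illustrates that loop under stated assumptions; the two-second chunking, SHA-256 digest, and JSON payload are placeholders for illustration, not TalkLock's actual encoding, which embeds speech features robust enough to survive re-posting in a different format (a raw byte hash, as used here, would not).

```python
# Minimal sketch of a rolling "speech commitment" loop, NOT TalkLock's
# actual protocol. Chunk length, SHA-256, and the JSON payload format
# are all illustrative assumptions.

import hashlib
import json

CHUNK_SECONDS = 2  # assumed rotation interval for the on-screen code


def digest_chunk(audio_chunk: bytes) -> str:
    """Fingerprint one chunk of audio. A real system would digest
    noise-robust speech features, not raw bytes, so the check would
    survive re-encoding; raw-byte hashing is a simplification."""
    return hashlib.sha256(audio_chunk).hexdigest()[:16]


def make_payload(chunk_index: int, audio_chunk: bytes) -> str:
    """Build the string a rotating QR code would display while recording."""
    return json.dumps({"i": chunk_index, "d": digest_chunk(audio_chunk)})


def verify(payload: str, audio_chunk: bytes) -> bool:
    """Re-derive the fingerprint from the clip's audio and compare it
    to the digest scanned out of the on-screen code."""
    claim = json.loads(payload)
    return claim["d"] == digest_chunk(audio_chunk)


# Example: an unaltered chunk verifies; a dubbed-over one does not.
original = b"...two seconds of PCM audio..."
code = make_payload(0, original)
assert verify(code, original)
assert not verify(code, b"...substituted audio...")
```

Because each code covers only its own chunk, a viewer can check any excerpt independently, which is what lets verification work "for any part of the video" rather than only end to end.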

The video requirement means TalkLock isn’t usable in many situations, Roy says, but future versions could replace visible QR codes with audio signals beyond human hearing to broaden its applicability.
