Microsoft has developed a tool it claims can detect deepfakes – images and video created by artificial intelligence (AI) software that are so lifelike the fakery can be almost impossible to spot.
While the tech giant is unlikely to be worried about tweens gender-swapping on Snapchat, deepfakes can also be used to spread fake news.
Social networks, in particular, are under pressure to do something about the tide of disinformation and conspiracy theories that is rife online at the moment.
“There is no question that disinformation is widespread,” write Tom Burt, corporate vice president of customer security and trust, and Eric Horvitz, chief scientific officer, in a blog post on Microsoft’s site.
The tool – called Video Authenticator – analyzes stills and video and provides a percentage likelihood, or confidence score, that what you’re looking at is a cleverly created fake. With video, it can provide this score frame by frame in real time.
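Microsoft has not published Video Authenticator’s code, but a per-frame scoring loop of the sort described might look roughly like the sketch below. The `score_frame` function is a hypothetical stand-in for a trained classifier; only the OpenCV frame-reading calls are real.

```python
import cv2  # OpenCV, used here only to read video frames


def score_frame(frame) -> float:
    """Hypothetical stand-in for a trained deepfake classifier.

    A real detector would run a neural network over the aligned,
    cropped face region; this placeholder simply returns 0.
    """
    return 0.0  # probability that the frame is manipulated


def score_video(path):
    """Yield a fake-confidence percentage for each frame as it is read."""
    capture = cv2.VideoCapture(path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # end of stream
        yield 100.0 * score_frame(frame)
    capture.release()


for i, confidence in enumerate(score_video("clip.mp4")):
    print(f"frame {i}: {confidence:.1f}% likely manipulated")
```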
“It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye,” Microsoft’s researchers wrote.
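The post does not say how those blending boundaries are measured. One very crude proxy, sketched below under that assumption, is to look for excess high-frequency energy left behind when a generated face is pasted onto a source frame; this is an illustrative heuristic, not Microsoft’s actual algorithm.

```python
import cv2
import numpy as np


def blending_artifact_score(frame) -> float:
    """Crude proxy for the 'blending boundary' cue Microsoft describes.

    Face-swap pipelines paste a synthesized face onto a source frame,
    which can leave faint seams and greyscale inconsistencies. As a
    rough signal, subtract a blurred copy of the frame and measure the
    high-frequency residual that remains. Illustrative only.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    residual = gray - cv2.GaussianBlur(gray, (5, 5), 0)
    return float(np.mean(np.abs(residual)))
```

A score like this could stand in for `score_frame` in the earlier sketch.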
Video Authenticator was created using a public dataset from FaceForensics++ and was tested on the DeepFake Detection Challenge dataset, both leading benchmarks for training and testing deepfake detection technologies.
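The post gives no detail of the evaluation itself; as a hedged illustration of what testing against a labeled dataset such as the DeepFake Detection Challenge involves, the sketch below scores a handful of made-up clip names and reports an area under the ROC curve. The `video_score` function is again a hypothetical placeholder.

```python
from sklearn.metrics import roc_auc_score


def video_score(path) -> float:
    """Placeholder: overall fake-confidence for a clip, e.g. the mean
    of the per-frame scores from the earlier sketch."""
    return 0.5  # stand-in value


# Hypothetical labeled evaluation clips: 1 = deepfake, 0 = genuine.
test_clips = [("real_001.mp4", 0), ("real_002.mp4", 0),
              ("fake_001.mp4", 1), ("fake_002.mp4", 1)]

labels = [label for _, label in test_clips]
scores = [video_score(path) for path, _ in test_clips]

# AUC of 1.0 means perfect separation of real from fake; 0.5 is chance.
print("AUC:", roc_auc_score(labels, scores))
```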
For now, though, deepfakes remain relatively rare, with conventional video-editing techniques more commonly used to produce misleading clips.