
Facebook bans 'deepfake' videos in run-up to US election


Critics say policy does not cover ‘shallow fakes’ – videos made using conventional editing tools

Monika Bickert, head of global policy management for Facebook, made the announcement on Monday. Photograph: Rex/Shutterstock

Facebook has announced a new policy banning AI-manipulated “deepfake” videos that are likely to mislead viewers into thinking someone “said words that they did not actually say”, as the social network prepares for the 2020 US election.

But the policy explicitly covers only misinformation produced using AI, meaning that “shallow fakes” – videos made using conventional editing tools, though frequently just as misleading – are still allowed on the platform.

The new policy, announced on Monday by Monika Bickert, Facebook’s head of global policy management, will result in the removal of misleading video from Facebook and Instagram if it meets both of the following criteria (see the sketch after this list):

  • “It has been edited or synthesised … in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.”

  • “It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”
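Read together, the two criteria form a logical AND: a video must be both misleading about someone’s speech and AI-generated before it qualifies for removal. A minimal sketch of that reading, with hypothetical field and function names (Facebook’s actual systems are not public):

    from dataclasses import dataclass

    @dataclass
    class Video:
        # Hypothetical fields for illustration; Facebook's real signals are not public.
        misleads_about_speech: bool  # edited so a subject appears to say words they never said
        ai_synthesised: bool         # produced by AI/ML that merges, replaces or superimposes content

    def should_remove(video: Video) -> bool:
        # Both criteria must hold, which is why a conventionally edited
        # "shallow fake" falls outside the policy however misleading it is.
        return video.misleads_about_speech and video.ai_synthesised

    # A misleading clip made with ordinary editing tools is not removed:
    print(should_remove(Video(misleads_about_speech=True, ai_synthesised=False)))  # False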

To date, there have been no major examples of content that would break such rules. Some news organisations, including the BBC, the New York Times and BuzzFeed, have made their own “deepfake” videos, ostensibly to spread awareness of the techniques. Those videos, while of varying quality, have all contained clear statements that they are fake.

The most damaging examples of manipulated media in recent years have tended to be created using simple video-editing tools. During the 2019 UK general election, the Conservative party came under fire for a video edited to make it appear as though the Labour MP Keir Starmer had no answer to a question about Brexit. Facebook confirmed at the time that the video complied with its misinformation policies, and since no AI was involved in its creation, it would still be allowed today.

In the US, a doctored video that seemed to show the House speaker, Nancy Pelosi, slurring her way through a speech was similarly allowed by Facebook. The video, spread by Trump supporters including Rudy Giuliani, was edited, but not using any technique more complex than slowing down the raw footage and pitch-shifting the audio.
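That kind of edit requires no specialist tooling. As a rough illustration, the following Python sketch drives ffmpeg to slow a clip and partially correct the audio pitch; the speed and pitch factors are assumptions for demonstration, not the values used in the Pelosi video:

    import subprocess

    # Illustrative values only: slow the clip to 75% speed, then keep the audio
    # pitch higher than a naive slowdown would leave it, so the edit is less
    # audible. The real factors used in the Pelosi video are not public.
    SPEED = 0.75   # assumed playback-speed factor
    PITCH = 0.9    # assumed final pitch relative to the original (naive slowdown would give 0.75)
    RATE = 44100   # assumed source sample rate

    filters = (
        f"[0:v]setpts={1 / SPEED:.4f}*PTS[v];"                  # stretch video timestamps
        f"[0:a]asetrate={int(RATE * PITCH)},aresample={RATE},"  # shift pitch to 90% of original
        f"atempo={SPEED / PITCH:.4f}[a]"                        # bring net audio tempo to 75%
    )
    subprocess.run(
        ["ffmpeg", "-y", "-i", "input.mp4", "-filter_complex", filters,
         "-map", "[v]", "-map", "[a]", "output.mp4"],
        check=True,
    )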

Real v fake: debunking the 'drunk' Nancy Pelosi footage - video

The removal policy is just one branch of Facebook’s attempt to fight misinformation, Bickert argued. “Videos that don’t meet these standards for removal are still eligible for review by one of our independent third-party fact-checkers, which include over 50 partners worldwide fact-checking in over 40 languages,” she said. “If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in news feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false.

“This approach is critical to our strategy and one we heard specifically from our conversations with experts. If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labelling them as false, we’re providing people with important information and context.”
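Treated as a pipeline, the fact-checking flow Bickert describes might be modelled like this; the data model and names are hypothetical, purely to make the sequence of consequences concrete:

    from dataclasses import dataclass
    from enum import Enum

    class Rating(Enum):
        TRUE = "true"
        PARTLY_FALSE = "partly false"
        FALSE = "false"

    @dataclass
    class Post:
        # Hypothetical model, purely for illustration.
        is_ad: bool = False
        demoted: bool = False
        ad_rejected: bool = False
        warning_shown: bool = False

    def apply_fact_check(post: Post, rating: Rating) -> Post:
        if rating in (Rating.FALSE, Rating.PARTLY_FALSE):
            post.demoted = True          # "significantly reduce its distribution"
            if post.is_ad:
                post.ad_rejected = True  # "reject it if it's being run as an ad"
            post.warning_shown = True    # viewers and sharers see a warning
        # The post itself stays up: removal happens only under the deepfake policy.
        return post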

The company also has a separate policy that allows content breaking its other rules to remain online if it is judged “newsworthy” – and treats all content posted by politicians as automatically newsworthy.

“If someone makes a statement or shares a post which breaks our community standards we will still allow it on our platform if we believe the public interest in seeing it outweighs the risk of harm,” said Nick Clegg, Facebook’s vice-president of global affairs and communications, when he introduced the policy last September. “From now on we will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard.” That policy means that even an AI-created deepfake video expressly intended to mislead could still remain on the social network, if it was posted by a politician.
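Building on the hypothetical should_remove sketch above, the newsworthiness exemption composes as an override; again, this is an illustrative reading of the stated policies, not Facebook’s implementation:

    def visible_despite_rules(video: Video, posted_by_politician: bool) -> bool:
        # Hypothetical composition of the two policies: politicians' posts are
        # treated as newsworthy by default, so even a video meeting both
        # removal criteria could stay up.
        return posted_by_politician or not should_remove(video)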

Facebook did not say why it limited the policy exclusively to videos manipulated using AI tools, but the company probably wanted to avoid putting itself in a position where it had to make subjective decisions about intent or truth. Facebook has struggled for years to settle on a deepfake policy, publicly acknowledging the potential damage such videos could inflict while standing by a prior decision – thought to be a direct policy of the founder, Mark Zuckerberg – to avoid ruling on whether content on the site is true or false.
