Instead of showing leadership, Twitter pays lip service to the dangers of deep fakes

This article was originally published on The Conversation
Neural networks can generate artificial representations of human faces, as well as realistic renderings of actual people. Shutterstock

Fake videos and doctored photographs, often based on events such as the Moon landing and supposed UFO appearances, have been the subject of fascination for decades.

Such imagery is often deep fake content, so called because it is created using deep learning, a technique associated with neural networks and digital image processing.

Last week, Twitter revealed plans to introduce a new policy governing deep fake videos on its platform.

The company proposed it would warn users about deep fake content by flagging tweets with “synthetic or manipulated media”. Twitter says media may be removed in cases where it could lead to serious harm, but has stopped short of enforcing a strict removal stance. Users have until November 27 to provide feedback.

In adopting this warning-only approach towards deep fakes, the social media giant has shown poor judgement.

Why deep fakes are dangerous

With advances in computer science, deep fakes are becoming an increasingly powerful tool to deceive people using social media.

Deep fake clips of celebrities and politicians are realistic enough to trick users into making financial, political and personal decisions based on the fake testimony of others.

This YouTube clip featuring actor Bill Hader shows how realistic deep fake content can be.

Whether it’s a David Koch erectile dysfunction cream scam, an announcement by Donald Trump that AIDS has been eradicated, or a fake interview with Andrew Forrest leading to a finance scam, deep fakes present a serious risk to our ability to trust what we view online.

Social media companies have so far taken a sloppy approach to this threat. They have even promoted photo algorithms that let users experiment with animated face masks, and provided tutorials on how to use editing programs.

Deep fake production is the professional version of this practice. At its worst, it can even threaten democracy.

Twitter’s latest draft policy on deep fakes sets a dangerous precedent. It allows social media platforms to handball away their responsibility to protect customers from manipulated videos and imagery.

Twitter should be just as accountable as television

It’s time social media giants such as Twitter started seeing themselves as the 21st century version of free-to-air television. With TV, there are clear guidelines about what cannot be broadcast.

Since 1992, Australians have been protected by the Broadcasting Services Act, which requires “fair and accurate coverage”. The act protects viewers in regards to the origin and authenticity of television content.

The same principles should apply to social media. Americans now spend more time on social media than they do watching television, and Australia isn’t far behind.

By suggesting they only need to flag tweets with deep fake content, Twitter’s proposed policy downplays the seriousness of the threat.

Sending the wrong message

Twitter’s draft policy is dangerous on two fronts.

Firstly, it suggests the company is somehow doing its part in protecting its users. In reality, Twitter’s decision is akin to watching a child struggle to swim in heavy surf, while nearby authorities wave a sign saying: “some waves may be hard to judge” – instead of actually helping.

Senior citizens and inexperienced social media users are particularly vulnerable to deep fakes. This is because they’re predisposed to trust online content that looks authentic.

The second reason Twitter’s proposition is dangerous is because social media trolls and sock puppet armies enjoy surprising online audiences. Sock puppets are specialists in deceiving users into believing they’re a single fake person (or multiple fake people) by means of false posts and online identities.

Basically, content that has been signposted as deep fake will be exploited by people wanting to amplify its spread. It’s unrealistic to suppose this won’t happen.

If Twitter flags posts that are fake, yet leaves them up, the likely outcome will be a popularity surge in this content. Because of how social media algorithms work, this means a greater number of fake videos and images will be “promoted” rather than retracted.

Twitter has an opportunity to take a leadership role in preventing the spread of deep fake content, by identifying and removing deep fakes from its platform. All major social media platforms have the responsibility to present a unified approach to the prevention and removal of manipulated and fake imagery.

The circulation of a Nancy Pelosi deep fake video earlier this year revealed social media’s inconsistency in the handling of deceitful imagery. YouTube removed the clip from its platform, Facebook flagged it as false, and Twitter let it remain.

Twitter is in the business of helping users repost links and content as many times as possible. It creates profit by generating repeated referrals, commentary, and the acceptance of its content through promoted trends.

If deep fakes aren’t removed from Twitter, their growth will be exponential.

A looming threat

Early versions of such spurious content were relatively easy to spot. People in the first deep fake clips appeared unrealistic. Their eyes wouldn’t blink and their facial gestures wouldn’t sync with the words being spoken.

There are also examples of harmless image manipulation. These include web apps on Snapchat and Facebook that let users alter their photos (usually selfies) to add backgrounds, or resemble characters such as cute animals.

However, this new generation of altered imagery is often hard to distinguish from reality. And as criminals and pranksters improve their production of deep fakes, the other side of this double-edged sword could swing at any time.


Dr David Cook is affiliated with Edith Cowan University as a lecturer in the School of Science, and is a Fellow of the Australian Computer Society