Wednesday 9th October 2019

Why deepfakes should be on the reputational radar

Concern is widespread that artificially generated ‘deepfake’ videos pose a major potential problem for those targeted, be they companies, CEOs, celebrities, academics and commentators, or politicians.

A new study of 14,678 deepfake videos by cybersecurity company Deeptrace suggests otherwise. Deepfakes may generate millions of views, yet the great majority (96%) are pornographic and have little wider societal impact.


Of the deepfakes that are not pornographic, most are designed to entertain, such as videos made with the Chinese face-swapping app Zao or a recent spoof of former Italian PM Matteo Renzi. Only a tiny minority have been expressly designed to sow misinformation or disinformation, or to damage reputation.

The reputational threat of deepfakes


This may change all too soon. Deepfakes are increasingly realistic, freely available, and easy to make. Artificial voice company Lyrebird promises it can create a digital voice that sounds like you in a few minutes (even if my voice apparently proved less than straightforward).

It is surely only a matter of time before we see more regular instances of deepfakes damaging – directly or indirectly – companies, governments and individuals through false or misleading news stories, hoaxes and reputational attacks.

A recent example: controversial Canadian psychology professor Jordan Peterson found himself at the mercy of a website where anyone could generate clips of themselves talking in his voice, forcing him to threaten legal action. The simulator has since been taken offline.
[Image: Jordan Peterson audio deepfake]

In another case, a political private secretary in the Malaysian government was arrested over a video allegedly showing him having illegal gay sex with the country's minister of economic affairs. The country's leader responded by saying the video was 'cooked up', but it remains unproven whether the video was manipulated.

Reputational risks of deepfakes for companies include:

  • A fake CEO town hall video regarding the new company strategy is ‘leaked’ to the outside world, allegedly by a short seller
  • The voice of a politician is used to manipulate a senior director into discussing allegations of corporate fraud
  • A fake recording of two executive board directors discussing the sexual habits of a colleague is used to blackmail the company
  • An outsider gains entry to a secured office by impersonating the voice of a company employee.

Spread across the internet and social media, and exploiting distrust in institutions and deep geopolitical tensions, the risks of malevolent deepfakes are only now starting to emerge.

While the likelihood of a deepfake attack remains low in the short term, and its impact remains hard to quantify, every organisation would be wise to start considering what deepfakes may mean for its name and image.



My next post will explore how companies can prepare to manage the reputational risks of deepfakes.

Deepfakes are only one form of AI risk, though arguably the one that poses the most direct threat to reputation.

I am crowdsourcing examples of AI risks of all kinds that have spilled into the public domain via my AI/machine learning controversy, incident and crisis database. Constructive contributions are welcome.

Photo by John Noonan on Unsplash