Tuesday 2nd May 2023

Deepfake – The new ‘weapon of reputational destruction’

Fake pictures of Pope Francis and Donald Trump may have raised a smile but the risks of such AI technology are huge…

I admit it, I was fooled. At the end of March, photos of Pope Francis dressed in a white puffer jacket appeared on Twitter.

I thought he looked quite cool, but I was a bit surprised. It wasn’t long before I found out that the photo was in fact a deepfake. At least I wasn’t the only one taken in.

Put simply, ‘deepfake’ describes the use of technology to create images – and in some instances audio communications – that are not real.

The concept first came to prominence back in 2017 when fabricated pornographic videos appearing to feature celebrities began to circulate.

Today the technology to create deepfakes uses AI and machine learning, both of which have advanced by leaps and bounds in recent years. These advances mean the results are more convincing, harder to detect and available to more people than ever before. It doesn’t take a genius to see the crisis waiting to happen.

Deepfake technology can be used to fake photos, videos and audio material; there have even been reports of deepfake WhatsApp voice messages. In essence, the technology takes existing videos and photos of a targeted individual and uses them to predict how that person’s face would look saying words they never actually articulated.

At the start this manipulation could be quite crude, but recent advances have expanded what can be done and made detection that much more difficult.

The fake pictures of Pope Francis were in fact created by a new AI tool called Midjourney, which generates digital images from text prompts. The shots of Pope Francis – along with others of Donald Trump pumping iron in jail – may have raised a smile, but in the interests of crisis scenario planning we can think of numerous risks this technology could bring about.

Tricking people with headed notepaper or convincing emails has been with us for years; one use of this ruse has been to get house buyers to send funds to fake solicitors’ accounts. Think how much more convincing this would be with a video or audio call.

Working from home has increased this type of risk for organisations: the option of checking something you are not sure about with a colleague sitting nearby is often no longer available. We can easily imagine a voice or even video message from a senior executive instructing an employee working from home to transfer money or pass confidential information to a third party. This is so-called CEO fraud, but with a new twist.

Deepfakes could also be used to influence elections or bring about civil unrest. Fake videos of politicians appearing to make inflammatory statements could cause or deepen conflict, and the technology could easily be used to spread disinformation and fake news. Whilst so many of us still assume that seeing is believing, the possibilities for abuse are endless.

One crisis advisor has dubbed deepfakes a “weapon of reputational destruction” and it is not difficult to see why. Imagine a fake video of a chief executive trashing the company’s brand, voicing opinions contrary to the company’s stated values, or falsely claiming to have broken the law or committed financial irregularities. There are plenty of opportunities for bad actors to destroy your brand.

There are moves afoot to help us spot deepfakes more easily. Digital authenticity companies are emerging that offer what are in essence online watermarks to show when a video has been endorsed by a credible source. The EU is looking to regulate AI technology with new laws that, among other things, would require synthetic media to be labelled as such. The UK is expected to follow suit.

The Coalition for Content Provenance and Authenticity (C2PA) has been formed, backed by the likes of the BBC, Sony and Microsoft. The C2PA aims to develop technical standards for certifying the source and provenance of media content. According to the C2PA, “provenance refers to the basic, trustworthy facts about the origins of a piece of digital content (image, video, audio recording, document). It may include information such as who created it and how, when, and where it was created or edited.”
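To make the idea concrete, here is a minimal sketch in Python of the kind of check a provenance scheme enables: confirming that a media file matches a record signed by its publisher. This is not the C2PA standard itself – the manifest layout, file names and use of the PyNaCl library are illustrative assumptions – but it shows the underlying principle of cryptographically binding content to trustworthy facts about its origin.

```python
# Illustrative provenance check, assuming the PyNaCl library.
# The manifest format and file names are hypothetical, not C2PA.
import hashlib
import json

from nacl.signing import VerifyKey
from nacl.exceptions import BadSignatureError


def verify_provenance(media_path, manifest_path, publisher_key_hex):
    """Return True only if (a) the manifest was signed by the
    publisher's key and (b) the media file matches the signed hash."""
    # Hash the media file exactly as the publisher would have done.
    with open(media_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    # The manifest records the "trustworthy facts" (who created the
    # content, when, and a hash of the file) plus a detached signature.
    with open(manifest_path) as f:
        manifest = json.load(f)

    record_bytes = json.dumps(manifest["record"], sort_keys=True).encode()
    signature = bytes.fromhex(manifest["signature"])

    # Step 1: was this record really signed by the claimed publisher?
    try:
        VerifyKey(bytes.fromhex(publisher_key_hex)).verify(
            record_bytes, signature)
    except BadSignatureError:
        return False

    # Step 2: is the file on disk the same file that was signed?
    return manifest["record"]["sha256"] == digest
```

In practice, schemes like C2PA embed the signed manifest within the media file itself, so that any edit not recorded and re-signed breaks the chain of provenance.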

The C2PA has delivered Version 1.2 of its solution, but work continues to ensure it operates across all formats. Once a digital solution is in place, it would need a major communication programme to ensure we all know what to look for – and how much global awareness could realistically be created? Then there is the old saying that “a lie can travel halfway around the world while the truth is still putting on its shoes.” Ironically, the attribution of that famous statement to Mark Twain is itself fake news.

Whilst we wait for a technological solution to this problem, there is a clear requirement for, at the very least, awareness training in organisations. Most organisations run regular cybersecurity training for employees, and it would be wise to extend this to the issue of deepfakes. Protocols should also be tightened in high-risk areas such as the handling of confidential information or the transfer of money.

When it comes to brand and reputation, the solution may well be a familiar one: make sure the channels of communication to your stakeholders, and to those who influence them, are open, two-way and above all trusted. Ask yourself: if your organisation were faced with a damaging deepfake attack, would your word be taken over what is being seen and heard? How quickly could you respond? Does your reputation have enough of a cushion of goodwill to ensure your stakeholders do not believe what would, in the past, have been the evidence of their own eyes?

Chris Tucker is chair of the CIPR Crisis Communications Network where this blog was first published.