Tuesday 4th June 2024

Political campaigns need to be proactive against deepfakes this election season

Labour's Wes Streeting is the latest politician to call out a deepfake video but what action can political campaigns take against this dangerous new reality?

Nefarious audio and video content made to trick voters presents a clear and present danger to free and fair elections. Deepfake technology has advanced at rapid speed as computer processing has become faster and cheaper and audio and video editing software has become universally available.

These advances have already been deployed in the US primary elections: a fake robocall using a spoofed voice of Joe Biden was made to confuse voters about the date of the election, all to suppress turnout among a targeted group of voters. With the UK general election having been called, and the US presidential election coming into view, we need to be more aware than ever of the dangers of deepfakes.

Political campaigns cannot stop the advancement of technology; instead, they should embrace the new reality of modern campaigns and recognise how dirty tricks have evolved alongside it. Nonetheless, there are steps that every political campaign (and concerned citizen) can take to minimise the chances of being thrown off course and duped by a deepfake.

Deepfakes are realistic-looking content created without consent. They can employ voices, videos, and images to create online content that deceives people. They can cause significant harm to individuals, being used for blackmail, harassment, fraud, revenge and other purposes. As artificial intelligence (AI) advances, the quality of deepfakes increases.

It is not that the technology is inherently bad. Some businesses have recently experimented with using AI to send personalised messages to their staff.

Tricking voters

Similarly, some politicians have used the technology for light-hearted purposes such as creating online games involving the candidates. But we have already seen examples of deepfakes being used in elections to try and trick voters. Joe Biden, Keir Starmer and Sadiq Khan have already been the victims of deepfakes. [Just as this blog was being published, Labour’s Wes Streeting called out a 'fake' video circulating on social media which claimed to show him calling Diane Abbott a "silly woman".]

Examples from all parts of the world are increasing in frequency – among them Indonesia and France.

It is not just about video but audio as well. What could be better than a slightly poor quality ‘illicitly recorded’ phone conversation or comments from an event where a candidate says something outrageous? The more amateur the sound quality, the more damage it may do.

And with elections across the world, not least the US, EU, and UK, taking place this year, there is a focus on what can be done about the danger.

According to a new survey from BCS (The Chartered Institute for IT) in the UK, the influence of deepfakes on the UK general election is a concern for most tech experts: 65% of IT professionals polled said they feared AI generated fakes would affect the result.

But they also expect the parties themselves to be involved: 92% of technologists said political parties should agree to publicise when and how they are using AI in their campaigns. This suggests that they do not entirely trust politicians either. According to another survey, 70% of UK MPs fear deepfakes.

Acting against deepfakes

There are regulatory and legislative solutions being suggested to deal with deepfakes but here and now there are actions that campaigns can take.

  1. Avoid the void – problems arise when there is a space to fill. The more content a campaign produces, and the wider the range of topics it covers, the less space there is for a deepfake to fill a void.
     
  2. Deal with controversy – rather than failing to have a position on a difficult issue of the day, a campaign needs to tackle it. Again, this prevents a deepfake from being able to exploit an issue where there are firm views but political silence.
     
  3. Consistency of approach – moving around too much on an issue opens space for deepfakes to exploit. The more an announcement looks out of the ordinary, away from the usual, the easier it will be to expose and challenge a deepfake. 
     
  4. Establish a dedicated unit for rapid response – all campaigns should have a team responsible for monitoring and correcting any false and misleading information, whether it comes from an AI-created deepfake or simply a misquoted statement. The more that responsibility is vague or unattributed, the less coherent and speedy the necessary response will be.
     
  5. Call it out as soon as possible – the dedicated unit needs to have access to the latest detection software and be staffed by a team of experts. Critically, the deepfake needs to be challenged as soon as possible to prevent it from gaining traction. Sometimes, media relations advice will say do not publicise or give airtime to an opponent’s argument as it only raises its profile. But deepfakes are different: they need to be warned against.
     
  6. Cross-candidate / party consensus – as much as possible, there should be a commonality of approach on deepfakes. All candidates and campaigns have an interest in tackling them. The more that some believe they will gain from distributing deepfakes, the more likely deepfakes are to have an impact.
     
  7. All candidates should take responsibility – dealing with deepfakes should not be seen as just the responsibility of a central campaign team. Every candidate runs a risk so there needs to be a local as well as a national focus.
     
  8. Inform the media – journalists are aware of their responsibilities when dealing with deepfakes and will welcome knowing when examples are found.
     
  9. Work with social media channels – campaigns should set up discussions with them in advance so that action can be immediate if examples are found. Establishing working protocols will help with speed.
     
  10. Control the search results – candidates must control their search results as well as the narrative around fakes and rumours. Remember, today’s deepfakes and smear campaigns are known for dropping false and misleading information late in the campaign cycle, often close to election day. A team needs to think about whether it can get a credible newspaper to run an article setting out the facts. How fast can the campaign issue a statement and post it on its website? Will enough voters even see the response? Campaigns need to be prepared to run search ads to direct curious citizens, as well as contextual ads based on keywords around the deepfake, warning people to “be aware”.

Fact checking

The internet has taught us to run into the fire instead of away from it. These attacks are salacious enough to generate news stories about the tactics as well as spread organically by old-fashioned word of mouth. People will be searching the gossiped rumour on their phones to learn more. You must, therefore, think about what the search results look like.

Search engine optimisation (SEO) matters for knocking down misleading information. Is there a credible outlet that will be covering the campaign’s late-breaking rumours? If not, you may need to create your own campaign website similar to Snopes, FactCheck.org or PolitiFact.

When these types of services did not exist to quickly dismiss the rumours and misleading propaganda in Ukraine, young students created their own website, StopFake.org, to become a transparent hub debunking the flurry of rumours and misinformation with credible, hyperlinked, sourced facts. This concept is not new. In 2008, Barack Obama’s campaign created FighttheSmears.com, a website to address all of the rumours. This fact-based, credible website was controlled by the campaign and indexed by Google and Yahoo on the top page of search results.

Unfortunately, the spreading of lies has become more advanced, using technology that morphs candidates’ voices and facial expressions.

Deepfakes are the new reality, and their impact could bring major political harm and undermine democracy. 

Campaigns must not bury their heads in the sand or pretend that this technology does not exist. All of us have a responsibility to take action against false and misleading fake advertisements from nefarious operators trying to sow chaos and discontent, cause confusion, or suppress voters. If that responsibility is not embraced, then we will all suffer the consequences.

Scott Goodstein was the external online director for Barack Obama’s 2008 campaign and was in charge of the campaign’s social media platforms, mobile technology, and lifestyle marketing. He was a lead digital strategist on Bernie Sanders’ 2016 campaign and is the founder of CatalystCampaigns.com.

Dr Stuart Thomson is a public affairs and communications consultant. Listen to Stuart's podcast, The Public Affairs in Practice.