Tuesday 15th August 2023

PRs! Don’t panic, generative AI won’t replace your influencer strategies

Artificial intelligence may concern some public relations professionals, but remember: human connection sits at the heart of influencer marketing…

According to Sky News, 21% of organisations say they are already using AI tools like ChatGPT, and 27% are investing in training to upskill employees in AI tools and technologies. The greatest take-up of AI, the survey found, was within marketing.

More than one-third (36%) of PR professionals are using AI tools such as ChatGPT on a regular basis for tasks like drafting press releases and media monitoring, up from one in seven (15%) in February this year.

Beyond streamlining processes in comms, PRs need to understand the potential impact of AI on all channels, internal and external – including the influencer campaigns often led by comms teams.

But realistically, should PR pros be worried about generative AI in their influencer strategies, or should they be embracing it?

The current role of generative AI in influencer marketing  

What has been seen so far within the industry is AI being used as a workflow tool – for example, helping to generate copy. This shouldn’t pose significant problems for brands or influencers if used correctly, with the generated copy personalised to add a human touch and originality.

Mint Mobile took this approach with a recent advert in which ChatGPT was tasked with writing the script in the voice of Ryan Reynolds. The smart advert coupled AI capabilities with Mint Mobile’s tone of voice and Ryan Reynolds’ personality in an open and relatable way. In this case, ChatGPT literally put the words together – just as it could for many tasks that can be automated to save time – while the resulting output was personalised with branded context and authentic delivery that brought it to life.

Without adding this human touch, brands and creators risk putting out content that lacks originality – which can in turn hurt audience engagement – or that contains inaccurate information if fact-checking is skipped, posing a risk to a brand’s reputation.

The importance of a human-first approach within influencer marketing

Influencer marketing delivers strong ROI for brands, largely due to influencers’ authenticity, relatability, and the trust they have built with their followers. Leveraging this human connection between influencers and their audiences is the foundation on which influencer marketing has been built. To put it simply: human connection sits at the heart of influencer marketing. 

This presents a problem for generative AI. Firstly, generative AI cannot replace a tastemaker’s genuine endorsement or recommendation – or at least not yet. Secondly, if creators all adopt the same tactics, content risks becoming formulaic and repetitive, leading to campaign aims falling flat.

A human touch is needed to stand out from the crowd, and it’s no different when using generative AI. Right now, using word or image generators requires human input: feeding AI tools the right prompts to create engaging content, as seen in the Mint Mobile example, where the ChatGPT instructions were tailored to the talent and the brand. Without this, brands could be accused of being unoriginal, or of copying brands that have produced similar content. Stripping influencer marketing of the human element at the heart of its success risks dampening its appeal, and it would be a shame to lose the genuine human connections brands can build with consumers via influencers.

Similarly, when it comes to influencer identification, use of generative AI must be layered with human checking and expert counsel. Data plays an important role when it comes to selecting talent for brand collaborations, using metrics such as engagement, sentiment and audience insights. While generative AI can support this data-led process, the chemistry between a brand and influencer must play a role in talent selection, and this requires a human-first approach.  

It’s also important to be conscious of generative AI's flaws, particularly its risk of perpetuating the biases present in the human-created data it learns from. Brands must be attuned to this to ensure inclusivity in campaigns.

That’s why we layer expert consultancy on top of metrics, where the tone and aesthetics of the content an influencer produces are incorporated into the process to identify talent that will spark the right emotional connection between a brand and its audience. This offers brands an edge that cannot be replicated by AI.

How can AI help achieve campaign goals? 

Generative AI isn’t going anywhere, and a blended approach in which the two co-exist may be the best route forward, harnessing the strengths of both real influencers and AI software.

There are ways both brands and creators can incorporate AI into their workflows. If guided with the right keywords and personalisation by influencers, ChatGPT could help script an influencer’s content. It could also offer data-based insights into different sector or regional markets based on how audiences have previously behaved. Understanding who the consumer is, what they respond to and the type of content that engages them arms creators with greater intel to produce content that is more likely to perform well or to open them up to new audiences.

From a brand perspective, the campaigns where AI may be best placed to play a role are those relevant to the space, such as a metaverse-based activation or a future-facing AR campaign.

The reputational risks of using AI in campaigns 

While regulation tries to catch up with this fast-evolving technology, it’s key to understand the limitations and the ethical grey areas of AI.

According to a study conducted by Provoke Media, 85% of communications professionals are concerned about the potential legal and ethical issues that generative AI technologies may give rise to in the communications industry in the future. Gartner has also predicted that, by 2024, at least a dozen businesses will come under fire in the media and in legal proceedings for unethical use of automation in marketing campaigns.

Already, contentious debates have broken out in the music and art space. Recently, a new song purportedly by Drake and The Weeknd, named Heart on My Sleeve, went viral. However, the song had in fact been created using AI to mimic the voices of the artists – and soon enough the video, which had gained over nine million views, was removed from social platforms in response to claims by the artists’ record label, Universal Music Group.

AI also has many limitations at present, most notably the potential to generate incorrect information. Integrity and transparency are two hallmarks of good public relations practice, and it’s important that these codes of conduct are upheld. Using AI without a human to fact-check information risks overriding these professional standards.

In the US, a bill introduced in June and titled the AI Disclosure Act of 2023 calls for any output generated by artificial intelligence to be accompanied by the following disclaimer: “Disclaimer: this output has been generated by artificial intelligence.”

The key to adopting generative AI for influencer-led campaigns is to base activity on the same principles that have made influencer marketing successful: transparency, authenticity and trust. Whether appropriate regulation is in place or not, brands must use their ethical compass to navigate this new territory. If that is forgotten, AI can have reputationally damaging consequences – which is any PR’s nightmare.

Sarah Penny is content and research director at Influencer Intelligence.