TECHNOLOGY
Wednesday 27th September 2023

How to protect brand reputation in the new age of generative AI influencer marketing

With artificial intelligence having the potential to put consumers’ trust in companies at risk, it’s vital that communicators perform due diligence…

On the surface, generative AI software such as ChatGPT and DALL-E may appear to present an opportunity to drive time and cost efficiencies, including in the execution of influencer campaigns. While this can be tempting to businesses, adopting generative AI comes with both benefits and risks, especially in terms of reputational and legal repercussions. Provoke Media reports that 85 per cent of communications professionals are concerned about the potential legal and ethical issues generative AI may give rise to in the communications industry in the future.

Responsible for protecting and growing positive brand perceptions, communications teams need to be well versed in this new territory. At a time when the full extent of the potential benefits and dangers of generative AI in influencer marketing is yet to be seen, and with little specific regulation in place, these teams can play an important role, advising businesses on where the reputational dangers lie to avoid losing favour with audiences.

The good news is that many of the risks associated with generative AI’s current potential application in influencer campaigns pre-date the technology. Whilst regulation catches up, what has come before in terms of disclosures and regulation can be used as the north star to navigate this new territory and reap the benefits. 

Is generative AI a risk to consumer trust online and what does this mean for influencer marketing?

Transparency, authenticity and trust are the bedrock of influencer marketing, and generative AI poses a risk to these values if not adopted with caution. There has long been criticism of images posted on social media that have been manipulated without being labelled as such, and of the negative impact this can have on society by perpetuating unrealistic expectations. Now, as generative AI becomes widely accessible, this conversation will only intensify. As brands experiment with generative AI in influencer marketing, it’s key that online trust is front of mind.

Tools such as Midjourney and DALL-E, which can generate imagery from prompts supplied by users, are becoming familiar territory. Take, for example, the picture of the Pope in a Balenciaga coat, which did the rounds online before it was realised to have been generated by AI.

Photo manipulation was an issue long before AI entered mainstream discourse, although advances in the technology are making it more difficult to separate the real from the fake. While regulation catches up, brands considering utilising AI must place transparency at the heart of their influencer marketing campaigns. For text-based content, generative AI should only be used as a framework that is then enhanced by humans.

This is more difficult with image-based content, but the key point to remember is that if audiences are deceived into believing that what they are seeing is real and untampered with, the trust between brand and audience will be severely damaged.

In the same way image manipulation disclosure rules have been enforced within advertising, brands should apply the same ethical standards and best practice for influencer marketing content that uses generative AI. 

Brands must also perform due diligence when working with influencers, considering how creators themselves may be experimenting with this nascent technology. PRs should work with their legal teams to ensure contracts include a requirement that any content created using generative AI is labelled as such, to mitigate the risk of content later being branded misleading or inauthentic and, ultimately, of losing consumer trust.

It’s not just manipulated imagery that poses a risk, either. It has been estimated that as much as 90 per cent of online content could be generated or manipulated using AI by 2026, escalating concerns about misinformation, another risk factor that can deal a blow to trust and to a brand’s reputation.

A case of human versus machine? Why generative AI must not be left unchecked 

While messaging generated by AI offers brands time and cost savings, removing the human element from influencer marketing risks compromising a campaign’s authenticity, damaging reputation and audience engagement, and opening brands up to legal risks.

It’s well documented that a key flaw of generative AI is its tendency to produce information containing significant inaccuracies, along with biases ingrained in the software through its training data. PRs need to be acutely aware of these limitations, ensuring trust isn’t misplaced, to protect a brand’s image. Rigour must be applied, with fact-checking in place and deliberate processes to scrutinise the inclusivity of both the influencers selected for campaigns and the content itself.

With so few guidelines on the legality of AI-generated content, a real risk for both brands and influencers is that of intellectual property and copyright infringement. A firm grasp of the potential legal implications is needed to avoid unethical and unlawful content at all costs. Generative art and music have already come under fire for using creators’ work, or for creating content in an artist’s style, without permission. A recent example is Universal Music Group’s claims against the AI-created song “Heart on My Sleeve”, which mimicked the voices of Drake and The Weeknd.

Again, plagiarism and IP infringement aren’t new. For now, businesses are best off following the regulations already in place for advertising and other creative industries, such as the music and photography sectors, where there is a degree of governance and guidance.

Experimenting with generative AI must include strategies for brand safety

As brands look ahead to a world of broadening AI capabilities, and as world leaders work to regulate its applications, transparency will be brands’ best defence against reputational and legal backlash.

Transparency, trust and authenticity are at the heart of influencer marketing’s success, and all three are threatened by the careless adoption of generative AI tools. As the adage goes, integrity is about doing the right thing even when no one is looking. While regulation is lacking, brands that incorporate generative AI into their strategies while also following public relations and influencer marketing best practice will mitigate risk and avoid the worst pitfalls.

Sarah Penny is content and research director at Influencer Intelligence.