TECHNOLOGY
Tuesday 31st October 2023

Avoiding the fat-finger slip of AI

Why the human element is essential to prevent AI errors in marketing campaigns…

Most of us have experienced the awkwardness of making a fat-finger slip. But usually the consequences are minimal, like inputting the wrong figure into a spreadsheet.

Unfortunately, in some instances, fat-finger slips can be serious. So as we start using AI to generate content for PR campaigns, how can we avoid potentially negative consequences?

The concept of the fat-finger slip originated in the stock market, where there have been numerous examples of incorrect inputting leading to transactions at unusually high or low prices.

The problem is that a single trading error can be replicated by algorithms across the globe, meaning that a stock price can rocket or crash overnight as the consequence of a single misplaced decimal point.

The cost of the mistake multiplies as it is replicated, with each transaction compounding the original error, meaning that a fraction of a penny can turn into hundreds of thousands – or millions – of pounds by the next morning.

There are similar dangers in content production. A piece placed in a respected outlet that contains errors generated by AI can eventually get enough mentions to become accepted as true – at least as far as other AIs writing about the same thing are concerned.

In June 2023, in an article titled ‘The Guardian’s approach to generative AI’, the newspaper’s editor-in-chief, Katharine Viner, and chief executive officer, Anna Bateson, wrote, “We will guard against the dangers of bias embedded within generative tools and their underlying training sets. If we wish to include significant elements generated by AI in a piece of work, we will only do so with clear evidence of a specific benefit, human oversight and the explicit permission of a senior editor. We will be open with our readers when we do this.”

A scientific example

Maintaining integrity is vital in any industry, but it’s particularly important in the publication of scientific, engineering and technical content. Researchers and writers have a responsibility to ensure any information they share is accurate, transparent and reliable, to build trust between scientists, engineers, technologists and the public. When mistakes are discovered, that trust is called into question.

In August 2023, The New York Times reported that a prominent physics journal had retracted a materials science paper amid reports that one of the authors had included fabricated and falsified data. While investigations are ongoing, some would argue that the damage is already done, with previous work by the same author – a professor of physics and mechanical engineering writing on superconductivity – now questioned for its validity.

The professor, Dr Ranga P Dias, maintains that any errors were accidentally introduced when collaborators on the paper used Adobe Illustrator software and its AI tool to create scientific charts. He claims that any inconsistencies were an unintentional consequence of using the software, rather than an effort to mislead.

Thankfully, this mistake was spotted and has not been subject to the usual fat-finger multiplication. But that is only because the paper was published in a peer-reviewed journal and then analysed by a global community of scientists, who rightly employ scientific methods to prove or disprove findings.

Is the same true of an article in a leading engineering or technology publication? Or in a non-peer-reviewed, but still respected, scientific publication? The answer is no, and this lack of critical scrutiny means that, in another context, a mistake like this could easily be accepted as fact. So how does this affect PR and marketing campaigns?

The ethics of using AI

Many businesses are using AI and open-source tools to streamline their operations, whether for data management, HR processes or asset creation. A recent report by the Chartered Institute of Public Relations (CIPR) titled ‘Artificial Intelligence (AI) tools and the impact on public relations practice’ found that there are around 5,800 technology tools available for research, planning and measurement.

While the report charts the impressive growth of generative AI and potential tools to support PR practices, it also highlights concerns. The ethical issues associated with AI, for example, include the question of whether practitioners need to declare when they use it, as the Guardian does, and the risk of accidentally spreading misinformation.

Effective communication is integral to showing that a business is credible and trustworthy. PR professionals can benefit greatly from using AI – for supporting technical research, monitoring the media, reporting and content management – but we need to make sure that everything we say is true.

We’re all seeing the benefits of AI for marketing and discovering how its tools can enhance our work and improve effectiveness. But the technology is still developing, so the resulting content requires fact-checking and, often, amending before it can be shared externally, to avoid spreading misinformation.

Failing to adequately review AI-generated content is ethically irresponsible and can be financially damaging and reputationally destructive. And you can’t really blame your errors on a fat-finger slip. 

Richard Stone is the managing director of technical marketing agency Stone Junction.
