Like many other disciplines transformed by the digital age, Public Relations is gradually seeing Artificial Intelligence become deeply embedded in its practice. As the discourse around the prevalence of AI grows, there is an increasing need to define sound ethical practices governing it.
In most cases, what is worth building an Artificial Intelligence API for (simply put, an API is a set of tools that determines how pieces of software for a particular programme interact with each other to accomplish a given task) is what is required and in demand, typically a task that would otherwise demand repetitive duplication.
Artificial Intelligence tools, whoever uses them, must be ethical. The onus of responsibility when something goes wrong with their application in PR is gradually shifting from the doorstep of the manufacturer of the programme alone to that of the Media/PR Practitioner responsible for the AI tool's usage.
Let's, for instance, look at the controversy that ensued during the last US Open, held at Flushing Meadows, between Serena Williams and Naomi Osaka. An AI tool built to determine penalties in tennis matches along the lines of the decisions taken by the chair umpire, Carlos Ramos, could generate controversial results, due to the ethical nature of certain decisions surrounding the match.
Ramos had, based on a thumbs-up signal by Serena Williams' coach, Patrick Mouratoglou, ruled that the tennis star had been 'coached', a violation that requires 'communication' of 'instructions' between coach and player.
The coach agreed that he made the signal, but Serena argued that she wasn't looking at her coach, had not seen the signal and had not been 'coached'.
Umpire Ramos did not actually establish communication between player and coach; he only established that a signal was made.
Since the coach didn't shout, it is doubtful that a thumbs-up would communicate anything meaningful during a fast-paced tennis match.
The ruling also coincided with her turnaround point, when Serena was trying to rally back from behind against her opponent; any irritation or loss of concentration on her part at that moment could break her resolve to do so.
An AI programme trained on such a decision could detect signals similar to Coach Patrick Mouratoglou's. But applying the rule in the same manner as Umpire Ramos would be an unethical application, since anyone could deliberately 'signal' to scuttle a tennis player's chances.
Here is a working guide on what is ethically acceptable in AI and PR:
1: WILL IT STAND THE PR PANEL TEST: Or would it generate wild controversies like the example stated above?
2: IS IT FAIR TO ALL: Would the manner AI is used in each instance be considered fair?
3: WOULD IT ENDANGER LIVES: Are human lives safe and protected by the application of the AI programmes?
4: DOES IT GIVE UNFAIR ADVANTAGE TO ONE PARTY: Is another party put at a disadvantage as a result of the AI programme?
5: CAN IT BE APPLIED IN A VICE-VERSA MANNER (THE VICE-VERSA RULE): Would the user of an AI programme want it applied in the same manner to him or her?
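The five-point guide above could, in principle, be encoded as a simple pre-deployment checklist gate for an AI tool. The sketch below is purely illustrative; the function and the wording of the checks are assumptions for demonstration, not part of any real PR toolkit.

```python
# Hypothetical pre-deployment ethics checklist mirroring the five-point guide.
# Names and wording are illustrative only.

ETHICS_CHECKS = [
    "Will it stand the PR panel test (no wild controversies)?",
    "Is it fair to all parties in every instance of use?",
    "Are human lives safe and protected by its application?",
    "Does it avoid giving any one party an unfair advantage?",
    "Would the user accept it being applied to them (vice-versa rule)?",
]

def passes_ethics_review(answers):
    """Return True only if every check is answered 'yes' (True)."""
    if len(answers) != len(ETHICS_CHECKS):
        raise ValueError("One answer is required per check.")
    return all(answers)

# A tool that fails even one check (here, fairness) is rejected outright.
print(passes_ethics_review([True, False, True, True, True]))  # False
print(passes_ethics_review([True] * 5))                       # True
```

The design choice is deliberate: the checks are conjunctive, so a single failure blocks deployment rather than being averaged away.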
There is a need to understand that PR itself provides guidance on what is right and what can keep a client or applicant on the right path and out of trouble. Yet many AI programmes have been built long before PR Practitioners apply them, even when they are applied in PR contexts.
Hence the need for PR Practitioners, where possible, to 'cross borders' and ensure AI programmes are PR/AI compliant right from the onset.
However, even with the right safety measures in any AI programme, as researchers Nick Bostrom of the Future of Humanity Institute and Eliezer Yudkowsky of the Machine Intelligence Research Institute put it in their paper 'The Ethics of Artificial Intelligence':
‘The local, specific behaviour of the A.I (programme) may not be predictable apart from its safety, even if the programmers do everything right’… and ‘ethical cognition itself must be taken as a subject matter of engineering.’
Or another statement accredited to the authors: ‘Verifying the safety of the system becomes a greater challenge because we must verify what the system is trying to do, rather than being able to verify the system’s safe behaviour in all operating contexts.’
This suggests that the PR Practitioner, as a matter of necessity, may need to be involved in the verification and, where required, the design process of AI programmes that will affect PR, to ensure they are ethically compliant.
Exoneration on the grounds that the programme was not designed by, and hence did not have the input of, the PR Consultant will no longer serve as a tenable excuse.
This is the growing dilemma the PR Practitioner faces as AI becomes prevalent in PR.
Is it really worth it?
Considering the positive impact AI has on PR, the answer is in the affirmative!
Anthony Olabode Ayodele Chart.PR is CEO of ICP Public Relations