Tuesday 20th June 2023

How AI can aid sports PRs’ fight against online toxicity

With online sports forums becoming ever more toxic, can PRs look to AI-based solutions to help mitigate hatred and risk?

Managing and promoting the public image and reputation of a team, organisation, or athlete is the primary role of a sports PR professional. And, similar to other industries, effectively handling crisis situations is a vital part of this work.

Now, more than ever, these crises are taking place across sports’ vast and highly influential online space. With the anonymity that online platforms offer, users are becoming ever more emboldened to attack athletes, influencers and sports personalities with everything from offensive language to targeted harassment and threats. Over the past few years, we’ve witnessed hateful and toxic comments, whether racist, misogynistic, homophobic, or transphobic, escalating at a rapid pace.

Couple this with online spam and a growing scam and fraud problem, and we have an exceptionally toxic environment posing a business-critical risk: not just to brand reputation, but to marketing assets and fan engagement too.

For PRs, the need to gain control of negative narratives is an essential part of mitigating this risk, pushing online content moderation high up the priority list. 

Football: a hotbed for hate

As arguably the world’s most popular sport, with millions of fans spanning different countries and cultures, football discussions, in particular, can easily become heated. And although it’s a problem affecting sports across the board, football has become a hotbed for online hate. 

One of the most high-profile cases of online hate in sport occurred during the Euro 2020 football tournament, when several England players were subjected to racist abuse on social media after missing penalties in the final, sparking widespread condemnation and calls for action from football authorities and governments. Two years on, Ofcom’s 2022 report showed that three quarters of Premier League players have been subjected to vile online abuse, highlighting the need for more action.

Putting revenue streams at risk – AI’s helping hand

Not only can online abuse have serious mental health implications for those targeted, which can negatively impact performance on the field, but poor or zero moderation can also put brands’ revenue streams at risk, including merchandising, ticket sales, TV rights and sponsorships. 

Damage from digital piracy and illegal match ticket sales, for example, poses a huge threat to the Premier League. Beyond the direct financial loss, fans enduring poor-quality streams and exposure to spam and scam links can also degrade the competition’s image.

While it’s obvious that moderating toxic content should be a priority, a content moderator’s role can be both soul-destroying and nigh on impossible. Each message takes approximately 10 seconds to read, analyse and moderate, while checking malicious links is also highly time-consuming and exposes moderators to security risks. Other challenges compound the problem: half of the world’s population speaks one of 23 different languages, each social page is another environment to protect, and applying moderation rules consistently at scale is extremely difficult. Online spam, perceived as user-generated content, must also be reviewed; no easy job when new methods of bypassing platforms’ algorithms emerge constantly.
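
To put that 10-second figure in context, here is a rough back-of-the-envelope sketch in Python; the match-day comment volume is a hypothetical illustration, not real platform data:

```python
# Rough moderation-workload estimate based on the ~10 seconds per message
# cited above. The match-day comment volume is an illustrative assumption.
SECONDS_PER_MESSAGE = 10
MESSAGES_PER_HOUR = 3600 // SECONDS_PER_MESSAGE   # ~360 per moderator

match_day_comments = 100_000  # hypothetical volume across one club's channels

moderator_hours = match_day_comments * SECONDS_PER_MESSAGE / 3600
print(f"One moderator clears ~{MESSAGES_PER_HOUR} messages per hour")
print(f"{match_day_comments:,} comments need ~{moderator_hours:,.0f} moderator-hours")
# One moderator clears ~360 messages per hour
# 100,000 comments need ~278 moderator-hours
```

At that rate, keeping pace with a single busy match day would require dozens of moderators working around the clock, before multiple languages and link-checking are even factored in.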

The message is clear: human moderators need support, and artificial intelligence (AI) offers just that. In fact, the only way to automatically identify and block toxic content on social networks in real time is a contextual, instantaneous moderation solution that unites the speed of a machine with human accuracy.

An intelligent moderation solution can identify all forms of online toxicity, offering 24/7 protection for online community spaces across multiple social media platforms. By replicating human moderation, AI can block toxic comments at scale, based on their severity and an automated analysis of their context.
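
As an illustration of the idea (a minimal Python sketch under stated assumptions, not Bodyguard.ai’s actual product: the classifier, thresholds and labels here are all hypothetical), such a pipeline scores each comment in its thread context and picks an action by severity:

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    FLAG_FOR_REVIEW = "flag"   # borderline cases escalate to a human
    BLOCK = "block"

@dataclass
class Comment:
    author: str
    text: str
    reply_to: str | None = None   # parent comment ID, for relational context

TOXIC_TERMS = {"<slur>", "<threat>"}  # toy placeholder, not a real word list

def classify_toxicity(comment: Comment, thread: list[Comment]) -> float:
    """Toy stand-in for a contextual classifier. A production system scores
    the text together with the surrounding thread, not against a word list."""
    words = set(comment.text.lower().split())
    return 1.0 if words & TOXIC_TERMS else 0.0

def moderate(comment: Comment, thread: list[Comment]) -> Action:
    score = classify_toxicity(comment, thread)
    if score >= 0.9:     # high severity: slurs, threats -> block instantly
        return Action.BLOCK
    if score >= 0.6:     # ambiguous: banter vs. abuse -> human review
        return Action.FLAG_FOR_REVIEW
    return Action.ALLOW  # criticism and debate pass through untouched

print(moderate(Comment("fan42", "great result tonight"), []))  # Action.ALLOW
```

The middle tier matters most for PRs: routing ambiguous comments to a human reviewer is what keeps legitimate criticism and banter from being silenced alongside the abuse.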

Protecting free speech in sports 

Sports communications professionals, however, must be careful not to default to these technologies without considering the wider impact on the brand. Freedom of expression is a fundamental part of sports. This may include journalists reporting on controversial topics, or fans expressing their support or criticism of teams and players in a non-abusive manner, both of which can enhance the overall fan experience and make sport more engaging. Athletes too, when given the right to express themselves freely on typically large platforms, can raise awareness of important issues such as racism, sexism, and homophobia, and inspire positive change.

As such, the key is a real-time moderation solution for brand communications channels that detects and moderates toxic content without hindering freedom of expression. AI solutions must also be able to take the linguistic and relational context of comments into account.

With AI, you can also gain a realistic overview of your online community’s health by automatically classifying all user-generated content, across multiple languages, against 30+ classifications and custom rules. This means you can easily monitor the voice of your fans and use that intelligence to help steer your communication decisions – again, enhancing the fan experience.
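
As a toy illustration of that reporting layer (again hypothetical: the labels below are examples, not any vendor’s actual taxonomy), the overview boils down to aggregating per-comment classifications into a simple health metric:

```python
from collections import Counter

# Example per-comment labels from the classification step; the taxonomy
# here is illustrative, not a specific vendor's 30+ rule set.
labels = [
    "supportive", "supportive", "criticism", "racism",
    "spam", "supportive", "misogyny", "criticism",
]

counts = Counter(labels)
toxic = {"racism", "misogyny", "homophobia", "threat", "spam"}

toxicity_rate = sum(counts[label] for label in toxic) / len(labels)
print(f"Community health snapshot: {dict(counts)}")
print(f"Toxic or spam share: {toxicity_rate:.0%}")   # => 38%
```

Tracked over time, a metric like this shows whether community health is improving or deteriorating, and where communications effort is best spent.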

A shared goal

Tackling online toxicity requires a collective effort from everyone involved in the sports community, including PRs. Let’s protect our games, our brand values, the fans’ experience, the players’ morale and the financial stability of the industry. While there’s a need for education and awareness-raising, ultimately we must redefine the way sports clubs analyse and protect their online community spaces at scale. And AI can be a huge help in doing so.

Matthieu Boutard is president and co-founder of Bodyguard.ai.