What the EU AI Act means for communicators
Earlier this year, recruitment AI firm HireVue declared its ‘commitment to’ and ‘leadership position’ on the transparent and ethical use of AI in hiring by publishing the results of an external audit of one of its early-career assessment algorithms. It also said it would stop using facial analysis in new recruitment assessments.
The audit concluded that the company’s assessments ‘work as advertised with regard to fairness and bias’. However, as Brookings Institution fellow Alex Engler pointed out, HireVue was being disingenuous.
The company misrepresented the findings of the audit in such a way that they appeared to extend to its other assessment systems. And it only stopped incorporating facial analysis after years of criticism of the accuracy, fairness, and privacy of its systems, capped by a legal complaint to the FTC from the Electronic Privacy Information Center (EPIC).
Compounding matters, HireVue restricted (and continues to restrict) access to and use of the audit, insisting those downloading it may not ‘use, copy, excerpt, reproduce, distribute, display, publish, etc. the contents of th[e] report in whole, or in part, for any purpose’.
Despite these shortcomings, HireVue is arguably more transparent about its systems than many other organisations designing or deploying AI. Few companies or governments voluntarily submit their systems to external audits or publish the results.
Furthermore, HireVue does make an effort to explain to job candidates and others how its systems work, albeit in a rudimentary manner. Research by Capgemini indicates that 41% of organisations fail to inform users how AI-driven decisions might affect them – a problem the consultancy finds is getting worse.
AI transparency pretexts and pitfalls
Given widespread public concerns about disinformation, deepfakes, bias and other uses and abuses of AI and algorithms, organisations of all types might reasonably be expected to make their systems understandable to end users and accessible to researchers, auditors, regulators, and other trustworthy intermediaries.
Mostly, they choose not to. To protect their trade secrets and IP. To limit their exposure to bias, privacy and other legal risks. And to minimise the gaming and manipulation of their systems by customers and bad actors.
And then there’s the fact that some of these systems are so large and complicated that their designers don’t fully understand how they work, a challenge that so-called explainability (or ‘XAI’) software is yet to resolve adequately.
These are reasonable concerns and challenges.
However, there are also plenty of less palatable reasons why organisations may want to keep their AI systems in the dark or restrict external scrutiny, including the ease with which a system’s primary objective can be obfuscated and the scope for its misuse underplayed.
And then there’s the paucity of AI audit standards and of dedicated legislation and enforcement – and, where these do exist, their inconsistency.
Why be open when you don’t need to be?
Mandatory transparency and openness
The EU’s draft AI Act aims to wean organisations off the opium of algorithmic opacity by insisting on transparency and openness as part of a broader risk-based, trust-centric approach to regulating artificial intelligence.
The proposed risk levels are categorised as unacceptable, high, limited and minimal. Minimal-risk systems, such as AI-enabled video games and spam filters, face no transparency requirements.
Chatbots, deepfakes, and other limited-risk systems, including those that recognise users’ emotions or characteristics automatically, need only make it clear to users that they are interacting with a machine.
The requirements are significantly more onerous (see box below) where AI is used for: biometric identification/categorisation; the management and operation of critical infrastructure; education and vocational training; employment and management; the provision of ‘essential’ public and private services; law enforcement; migration, asylum and border control management; or the administration of justice and democratic processes.
High-risk AI transparency requirements
- Should be designed and developed to ensure users can interpret the system’s output
- Should be accompanied by relevant, accessible, concise, correct, and clear user instructions
- Should include the identity and contact details of the provider and its authorised representative
- Should set out the purpose, accuracy, robustness, and limitations of the system, as well as known risks
- Should specify, where appropriate, the input, training, validation and testing data
- Should set out the system’s performance, the nature and degree of human oversight, and the expected lifetime and software updates.
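In practice, meeting these requirements means capturing the same information in a consistent, retrievable form. The sketch below is purely illustrative – the class, field names and example values are hypothetical, not drawn from the Act or any standard – but it shows how such disclosures could be held as a structured, machine-readable record alongside formal documentation:

```python
from dataclasses import dataclass, field

@dataclass
class TransparencyRecord:
    """Hypothetical, illustrative record of the disclosures listed above."""
    system_name: str
    intended_purpose: str
    provider: str                   # identity of the provider
    provider_contact: str           # contact details of provider / authorised representative
    accuracy_and_limitations: str   # accuracy, robustness and limitations
    known_risks: list = field(default_factory=list)
    data_notes: str = ""            # input/training/validation/testing data, where appropriate
    human_oversight: str = ""       # nature and degree of human oversight
    expected_lifetime: str = ""     # expected lifetime and software update policy

# Example usage with placeholder values
record = TransparencyRecord(
    system_name="Example screening assistant",
    intended_purpose="Shortlisting applications for early-career roles",
    provider="Example Provider Ltd",
    provider_contact="transparency@example.com",
    accuracy_and_limitations="Validated in one market only; accuracy drops on short applications",
    known_risks=["May under-score non-native English speakers"],
    data_notes="Trained on anonymised historical applications",
    human_oversight="All rejections reviewed by a recruiter before candidates are notified",
    expected_lifetime="Reviewed annually; retrained or retired as needed",
)
```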

Challenges, risks and opportunities for communicators
Communicators, reputation managers, marketers and others have long been consigned to a largely promotional role, or to mopping up after a system goes astray. Mandatory transparency gives them a golden opportunity to sit at the AI decision-making table.
Given the power and impacts of AI, a holistic, long-term strategic approach is advisable. The outside-in perspective communicators provide can help leadership and AI teams understand user and stakeholder needs, expectations and behaviours, and crystallise reputational and other risks.
These risks should be assessed – quantitatively, where possible – prioritised, and used to prepare playbooks and incident response and crisis plans, including holding statements. Messaging, communications-related policies and protocols, and teamwork should be tested and updated regularly.
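By way of illustration only – the example risks, 1–5 scales and priority thresholds below are hypothetical, not a prescribed methodology – a quantitative assessment can be as simple as scoring each risk for likelihood and impact and ranking the results:

```python
# Illustrative likelihood x impact risk scoring (hypothetical risks and scales)
RISKS = [
    # (description, likelihood 1-5, impact 1-5)
    ("Biased outputs affect a protected group", 3, 5),
    ("Users not told they are interacting with AI", 4, 3),
    ("External audit findings misreported", 2, 4),
]

def score(likelihood: int, impact: int) -> int:
    """Simple product score: higher means higher priority."""
    return likelihood * impact

for description, likelihood, impact in sorted(RISKS, key=lambda r: -score(r[1], r[2])):
    s = score(likelihood, impact)
    band = "high" if s >= 12 else "medium" if s >= 6 else "low"
    print(f"{s:>2}  {band:<6}  {description}")
```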
With AI systems operating more in the open, communicators are also in a good position to help develop employee and customer education campaigns as well as to ensure that technical jargon is minimised and formal disclosures and day-to-day communications are clear, succinct and understandable to everyone.
Getting ready
The EU’s proposed AI Act is the first salvo in what is likely to be a long, drawn-out battle before it becomes law. The current transparency requirements may change.
Like the General Data Protection Regulation (GDPR), the AI Act has an extra-territorial dimension: it may apply to providers and users of AI systems based outside the EU where those systems are placed on the EU market or their outputs are used within it.
If the AI Act has anything like the impact of GDPR – intended and unintended – communicators across the world would do well to pay close attention to its evolution and start getting ready.
Charlie Pownall is an independent reputation and communications advisor. An author, RSA Fellow and former EU official, he is founder of AI transparency research and advocacy initiative AIAAIC.