Tuesday 27th April 2021

The EU’s new AI regulation puts trust and transparency to the fore. Here’s what it says

The European Commission's new legislative package on artificial intelligence (disclaimer: the Commission is a former employer) has drawn plenty of attention for banning high-risk uses such as live facial recognition and social credit scoring.

That transparency will be mandatory has generated less ink. Yet it is a critical ingredient of the Commission's risk-based, trust-centric approach and, as such, has important potential implications for every organisation designing and deploying AI systems.

To date, notions of AI transparency have been largely limited to so-called explainability (or ‘XAI’), a technical solution that enables technologists and engineers to interpret, understand and explain their models.
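To make that concrete, here is a minimal sketch of what explainability tooling looks like in practice, using permutation feature importance in scikit-learn. The dataset, model and technique are illustrative assumptions for the example, not anything the regulation prescribes:

```python
# A minimal, illustrative XAI sketch: permutation feature importance
# with scikit-learn. Shuffling one feature at a time and measuring the
# drop in accuracy shows which inputs the model leans on most heavily.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five features the model depends on most.
top = sorted(zip(X.columns, result.importances_mean),
             key=lambda pair: pair[1], reverse=True)[:5]
for name, importance in top:
    print(f"{name}: {importance:.3f}")
```

Techniques like this explain a model to the people who build it; the regulation's transparency obligations reach further, to the people affected by the system.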

The Commission’s proposal promises to give risk managers, reputation managers, communicators and others accustomed to dealing with transparency and openness issues a seat at the AI decision-making table.

Here is a summary of what the regulation proposes transparency-wise:

Transparency obligations will apply to systems that (i) interact with humans, (ii) are used to detect emotions or determine association with (social) categories based on biometric data, or (iii) generate or manipulate content (‘deep fakes’).

When people interact with an AI system, or when their emotions or characteristics are recognised through automated means, they must be informed of that circumstance.

If an AI system is used to generate or manipulate image, audio or video content that appreciably resembles authentic content, there should be an obligation to disclose that the content is generated through automated means, subject to exceptions for legitimate purposes (law enforcement, freedom of expression). This allows persons to make informed choices or step back from a given situation.
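The regulation does not prescribe how such a disclosure should be made. As one hypothetical illustration, a provider could embed a machine-readable flag in a generated file's metadata; the sketch below uses Pillow, and the key names are invented for the example:

```python
# Hypothetical sketch: embedding a disclosure in a generated image's
# PNG metadata with Pillow. The 'ai-generated' and 'disclosure' keys
# are invented for illustration; the regulation prescribes no format.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.open("generated.png")  # an AI-generated image (assumed)

metadata = PngInfo()
metadata.add_text("ai-generated", "true")
metadata.add_text("disclosure",
                  "This content was generated through automated means.")

# Save a copy carrying the machine-readable disclosure.
image.save("generated_labelled.png", pnginfo=metadata)
```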

And here’s what it proposes for each risk level:

Unacceptable Risk

Defined as ‘AI systems whose use is considered unacceptable as contravening Union values, for instance by violating fundamental rights’. Examples include social credit scoring, real-time remote biometric identification systems (with ‘certain limited exceptions’), and the use of subliminal techniques to target children, the elderly, disabled people and other vulnerable groups.

  • Transparency requirements for biometric exceptions are not defined. 

High Risk

Defined as: biometric identification/categorisation; management and operation of critical infrastructure; education and vocational training; employment, workers management and access to self-employment; ‘essential’ public and private services; law enforcement; migration, asylum and border control management; administration of justice and democratic processes.

High-risk systems must:

  • Be designed and developed so that users can interpret the system’s output
  • Be accompanied by relevant, accessible, concise, correct and clear user instructions
  • Include the identity and contact details of the provider and its authorised representative
  • Set out the purpose, accuracy, robustness and limitations of the system, as well as known risks
  • Specify, ‘when appropriate’, the input/training/validation/testing data
  • Set out the system’s performance, the nature/degree of human oversight, expected lifetime and software updates.

Limited Risk

Defined as AI systems such as chatbots.

  • Users must be aware they are interacting with a machine (per the summary above).
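A hypothetical illustration of how a chatbot might meet that obligation follows; the class and wording are invented for the example, and the regulation prescribes no particular form:

```python
# Hypothetical sketch: a chatbot that discloses its automated nature
# before the conversation starts. Class name and wording are invented
# for illustration.
class DisclosingChatbot:
    DISCLOSURE = ("You are chatting with an automated assistant, "
                  "not a human.")

    def greet(self) -> str:
        # Surface the disclosure as the very first message.
        return self.DISCLOSURE

    def reply(self, message: str) -> str:
        return f"(automated) You said: {message!r}"


bot = DisclosingChatbot()
print(bot.greet())
print(bot.reply("Can I speak to a person?"))
```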

Minimal Risk

Defined as AI-enabled video games, spam filters and other such applications.

  • No transparency requirements.

In addition, the regulation sets out a number of legal notification requirements, including pre-launch conformity assessments and the obligation to communicate risks posed by high-risk systems to the relevant authorities as soon as they become known.

If GDPR is anything to go by, the Commission’s proposed AI regulation is a first salvo in what is likely to be a long, drawn-out battle before it becomes law. The current transparency requirements may change.

Should it have anything like the impact of GDPR (intentional and unintentional, inside and outside the EU), it will be a mêlée that communicators across the world would be well advised to pay attention to.

The AIAAIC repository is a free, open resource exploring the limitations, consequences and risks of AI, algorithms and automation. Comprising 8,000+ data points, the repository details 600+ incidents and controversies driven by and relating to AI, algorithms and automation since 2012. CIPR members may use, copy, redistribute and adapt the repository with colleagues and others.