The EU’s AI Act And Product Safety

This blog post was co-authored by Tegan Johnson, Solicitor Apprentice, London, and was originally published on the Compliance & Risks 'In Practice Series' blog in December 2023.

The first draft of the Artificial Intelligence Act (the “AI Act”) was proposed by the EU Commission in April 2021, making it the first substantive framework of its kind. It aims to provide a single framework for AI products and services used in the EU, ensuring products placed on the EU market are safe while allowing for innovation.

The Act will apply to systems used and products placed on the EU market – even where the providers are not in the EU – and adopts a risk-based approach akin to that commonly seen in Medical Device Regulations, with obligations proportional to the level of risk.

There is no single agreed definition of AI within academia or industry, so defining its scope and seeking to regulate it in such a comprehensive manner is a bold approach by the European Commission, akin to its ambitions when introducing the General Data Protection Regulation (GDPR): to put in place a gold-standard, globally influential regulatory framework.

Currently the AI Act has entered the final stage of the legislative process, with the EU Parliament and Member States thrashing out the details of the final wording; certain aspects in particular have been subject to intense debate. Indeed, as recently as early December, the final trilogues were taking place and substantive amendments were being debated. It was announced that a political agreement had been reached on the AI Act, but we are awaiting publication of the final draft text, likely in the New Year. Where possible, we have included the reported outcome of those debates in the analysis below.

Once a final form is agreed and approved, the AI Act will enter into law and, following a grace period of up to two years, its requirements will apply. The European Commission’s ambition is to agree a final draft prior to next year’s European elections, for fear that the elections could otherwise cause significant delays.

While the draft may still be subject to some changes, its fundamentals represent such a step change for actors in this space that businesses deploying AI should understand the proposed requirements and their passage into law.

The Legal Framework

The key provisions of the law include:

Definition Of AI
“Software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”.

Annex I includes: machine learning approaches, logic programming and search and optimisation methods.
Entities Within Scope
  • “Providers” placing AI systems on the market (or putting them into service) in the EU, regardless of where they are based, and “users” of such systems within the EU.
  • “Product manufacturers” applying high-risk AI systems to their products, which shall take responsibility for the compliance of the AI system and bear the same obligations as those imposed on the “Provider” under the Regulation.
  • “Authorised representatives” established in the EU which have a written mandate from a provider of an AI system to carry out obligations outlined in the Act on its behalf.
  • “Importers” that place on the market AI systems bearing the name or trademark of an entity established outside the EU.
  • “Distributors” that make available AI systems without affecting their properties.
Definitions Of Entities
  • Definition of “Providers”: Providers are defined as “a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge”.
  • Definition of “Users”: Users are defined as “any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity”.
  • Definition of “Product manufacturer”: The manufacturer of a product subject to the acts listed in Annex II, sold under its own name, that applies a high-risk AI system.
Risk Levels
The Act follows a risk-based approach whereby AI systems are categorised by the level of risk they pose, with obligations proportionate to that risk (a simplified sketch of how the tiers apply in sequence is set out after the list below).

The risk levels comprise:
  1. Prohibited uses
  2. High risk
  3. Low or limited risk
  1. Prohibited uses
    Some forms of AI are explicitly prohibited under the AI Act as they are deemed to pose an unacceptable level of risk and/or to serve unacceptable purposes. See further detail below.
  2. High-Risk
    Some forms of AI are classed as high risk, where they:
    a) are (1) intended to be used as a product covered by Union harmonisation legislation listed in Annex II (or as a safety component of one), which covers products such as machinery, toys and medical devices, and (2) the product is required to undergo a third-party conformity assessment with a view to placing on the market/putting into service; and/or
    b) are listed specifically at Annex III (i.e. biometric ID, critical infrastructure safety, education and vocational training, employment, public benefits, law enforcement, border control and administration of justice).
  3. Low/Limited Risk
    Other forms of AI not falling within the prohibited use and high-risk categories will be deemed low or limited risk. These are subject to much less prescriptive obligations. Generally speaking, this category includes spam filters, chatbots and other non-intrusive products.
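To make the ordering concrete, here is a minimal sketch – our own simplification for illustration, not wording from the Act – of how these tiers apply in sequence: prohibited uses are checked first, then the Annex II / Annex III high-risk criteria, and anything else falls into the low or limited risk tier. The boolean inputs stand in for the Act’s detailed legal tests.

```python
# Illustrative only: a simplified decision order for the draft AI Act's risk
# tiers. The boolean inputs stand in for the Act's detailed criteria.

def risk_tier(prohibited_use: bool,
              annex_ii_product_with_third_party_assessment: bool,
              annex_iii_listed_use: bool) -> str:
    """Classify a system into the draft Act's broad risk categories."""
    if prohibited_use:
        # e.g. subliminal manipulation or social scoring by public authorities
        return "prohibited"
    if annex_ii_product_with_third_party_assessment or annex_iii_listed_use:
        # e.g. medical devices, biometric ID, employment, law enforcement
        return "high risk"
    # e.g. spam filters, chatbots and other non-intrusive products
    return "low or limited risk"

print(risk_tier(False, True, False))   # "high risk"
print(risk_tier(False, False, False))  # "low or limited risk"
```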
Prohibited Uses
Some forms of AI are explicitly prohibited under the AI Act, including:
  • Those that deploy subliminal techniques to “materially distort a person’s behaviour in a manner that causes or is likely to cause … physical or psychological harm”.
  • Those that exploit vulnerabilities of a specific group due to age or disability to cause physical or psychological harm.
  • Those deployed by or on behalf of public authorities to classify or evaluate people based on behaviour or personality, where the resulting scores could lead to detrimental treatment of people or groups in contexts unrelated to the data, or treatment that is disproportionate.
  • Those involving real-time biometric ID in public spaces for law enforcement (unless strictly necessary for finding victims of crime or suspects of specified offences, or prevention of a specific substantial threat to life).
The recent debates also suggest that some items will be added to this list:
  • Databases based on bulk scraping of facial images.
  • Systems which categorise individuals based on sensitive personal traits such as race or political views.
  • Predictive policing software that predicts the likelihood of crime based on personal traits.
  • Emotion recognition in workplace/education environments, except where used for safety reasons.
High Risk AI System Obligations
  • Required to undergo a conformity assessment, including the drawing up of technical documentation and a declaration of conformity, before placing on the market.
  • In certain limited use cases, third-party conformity assessment by a notified body is required, including in cases regarding biometric identification and categorisation system providers.
  • Risk management system to be implemented and maintained.
  • Testing throughout development and prior to placing on the market against defined metrics.
  • Using only data which meets quality criteria to train models (where applicable).
  • Design products with capabilities for automatic recording of event logs which ensure traceability of risk-related events as well as usage and input data.
  • Inclusion of instructions for use which identify the Provider and any risks of use.
  • Design for human oversight during their use.
  • Design to achieve an appropriate level of accuracy, robustness and cybersecurity throughout its lifecycle.
  • Products which continue to learn after being placed on the market should be developed to address potential feedback loops and bias.
  • Registration in an EU database, which is to be developed by the European Commission.
  • Ensuring systems are designed to inform any natural person using it that they are using an AI system, and in the event they generate or manipulate image/audio content to create “deepfakes”, disclose the artificial generation/manipulation of the content.
  • Undertake post-market monitoring to analyse use and inputs and confirm compliance.
  • Report any serious incident which constitutes a breach to the relevant Market Surveillance Authority within 15 days.
  • In the event of non-conformity, take corrective action to bring it into conformity or withdraw or recall it.
  • Where there is no importer within the EU, the provider shall appoint an authorised representative within the EU to cooperate with national competent authorities.

In addition, recent debates seem to have introduced a requirement for bodies providing public services (such as healthcare or education) to conduct a “fundamental rights impact assessment” before deploying high-risk AI systems.

Limited Risk AI System Obligations
Limited risk systems are much less strictly regulated, and there are fewer obligations on parties placing such systems on the market. The obligations that do apply include:

  • Ensuring they are designed to inform any natural person using it that they are using an AI system.
  • In the event they generate or manipulate image/audio content to create “deepfakes”, disclose the artificial generation/manipulation of the content.
Exclusions & Exemptions
Military use exclusion:
There is a specific exclusion for AI systems developed for or used exclusively for military purposes (the latest debates suggest this will apply both to AI systems used by nations and to those used by external contractors), and for systems used for law enforcement and judicial enforcement where utilised by public authorities.

Notable proposed exemptions debated but not currently in the available draft text
In addition, exemption conditions for products ordinarily falling within the high-risk classification have been debated, with strong dispute amongst Member States. Proposals include exemptions for AI systems:
  • That do not materially influence the outcome of decision-making but instead perform a narrow procedural task, for example an AI model that transforms unstructured data into structured data or classifies incoming documents into categories.
  • That review a previously completed human activity, i.e. merely provide an additional layer of review on top of human activity.
  • Intended to detect decision-making patterns or deviations from prior decision-making patterns to flag potential inconsistencies or anomalies, for instance, the grading pattern of a teacher.
  • Used only to perform tasks preparatory to an assessment relevant to a critical use case. Examples include file-handling software.

The Consequences For Non-Conformity

It is Member States’ responsibility to set the exact penalties for breach, as is the case with many EU product safety regimes. However, there are some specific examples and caps outlined in the draft, and we can draw conclusions from other regimes which may help in understanding the potential penalties for breach. The possible penalties include:

Corrective Actions: Competent Authorities generally have powers to rectify non-compliance, to prevent a non-compliant product from being placed on the market, and/or to order its withdrawal or recall. Authorities will have such powers in relation to AI systems under the General Product Safety Regulation as well as under the draft provisions of the AI Act. The AI Act develops this by granting Competent Authorities additional investigatory powers, including a requirement that regulatory authorities be granted access, when necessary, to the training data, source code and other relevant information relating to the AI system in order to determine whether a breach has taken place.

Monetary Penalty: There are varying fines and scales of fine outlined in the draft Act, intended to vary in seriousness depending on the type and scale of non-conformity. Fines are capped at 30 million euros or 6% of global income (whichever is the higher) for specified infringements, and at 20 million euros or 4% of global income (whichever is the higher) for others, though it is expected that Member States will create their own detailed rules and scales in practice (a simple illustration of how these caps would operate is set out after this section).

Criminal Sanctions: Select regulations allow competent authorities or Member States to set sanctions, and these can include criminal sanctions and imprisonment for serious breaches. It remains to be seen whether Member States will set such strict sanctions for breaches relating to AI products.

Civil Liability: Where a company fails to comply with its obligations, it may become liable for any resulting damages.

In addition to this general power, the draft AI Liability Directive, which is also being drawn up and progressed by the EU, would allow courts to compel the provision of evidence from providers of AI systems and to reverse the burden of proof (if certain conditions are met), assisting claimants bringing claims under product safety legislation.
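For illustration only, the sketch below shows how the draft’s fine caps would be calculated: the applicable ceiling is the higher of the fixed amount and the percentage of global income, using the draft’s figures of 30 million euros / 6% for the most serious infringements and 20 million euros / 4% for others. The binary “serious vs other” split is our own simplification of the draft’s infringement categories, and actual fines would be set by Member States within these caps.

```python
# Illustrative only: the draft caps fines at the higher of a fixed amount and
# a percentage of global income. The figures are the draft's caps; the
# "serious" flag is a simplification of the infringement categories.

def fine_cap(global_income_eur: float, serious_infringement: bool) -> float:
    """Return the maximum possible fine under the draft AI Act's caps."""
    fixed, pct = (30_000_000, 0.06) if serious_infringement else (20_000_000, 0.04)
    return max(fixed, pct * global_income_eur)

# A company with EUR 1bn global income: 6% (EUR 60m) exceeds the EUR 30m floor.
print(fine_cap(1_000_000_000, serious_infringement=True))   # 60000000.0
# A company with EUR 100m global income: 4% (EUR 4m) is below the EUR 20m floor.
print(fine_cap(100_000_000, serious_infringement=False))    # 20000000.0
```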

Checklist To Improve Compliance

  1. Assess the scope of the draft and its key provisions as currently drafted to determine whether your products are likely to fall within the regulation, and the implications of their potential risk level, particularly where AI systems or products applying such systems might fall within the high-risk category.
  2. Consider undertaking an internal review of practices, particularly with a view to ensuring that sufficient data is recorded and saved to be of use in the case of a future inspection or for compliance with the Act.
  3. All businesses expecting to be affected by the law should monitor future amendments, commencement dates and other related laws which will change or supplement the framework it provides.
  4. If you provide AI systems already, adding disclaimers regarding the risks and intended usage of the products could go a long way to assisting in compliance – especially for products categorised as lower risk.
  5. The draft reveals certain priorities, one being the accuracy of the data used. Data used for training and input should be accurate and free of bias to the extent possible. Grappling with the quality of data earlier rather than later may save a last-minute rush for compliance.
  6. Certain industries (medical devices, for example) will automatically be categorised as high risk. For these companies, the obligations are much more onerous. Creating plans for mitigation and internal governance early, and enlisting specialist help, can help create a functioning system from the get-go.
