The EU AI Act, 2024

The EU AI Act is one of the first binding legislative instruments to govern the AI lifecycle, and it establishes what some have come to call the “foundation” of AI legislation. The Act classifies AI systems according to the risk they pose and regulates them from a risk-centric perspective: it prohibits systems that pose an unacceptable risk, imposes extensive obligations on high-risk systems, and applies lighter transparency obligations to limited-risk systems.

The Act prohibits outright AI systems that:

  • Deploy subliminal, manipulative, or deceptive techniques to distort behaviour or impair informed decision-making, causing significant harm; or

  • Exploit vulnerabilities pertaining to age, disability, or socioeconomic circumstances to distort behaviour, causing significant harm; or

  • Use biometric categorisation systems that infer sensitive attributes, except for labelling or filtering lawfully acquired biometric datasets or when law enforcement categorises biometric data; or

  • Carry out social scoring that results in detrimental or unfavourable treatment of the people scored; or

  • Assess the risk of an individual committing criminal offences based on profiling or personality traits, except when used to augment human assessments based on objective, verifiable facts directly linked to criminal activity; or

  • Compile facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage; or

  • Infer emotions in workplaces or educational institutions, except for medical or safety reasons; or

  • Perform real-time remote biometric identification in publicly accessible spaces for law enforcement, except when searching for missing persons, abduction victims, or victims of human trafficking or sexual exploitation; preventing a substantial and imminent threat to life or a foreseeable terrorist attack; or identifying suspects in serious crimes.

Even then, remote AI-enabled biometric identification is permitted only where not using the tool would cause considerable harm, and its use must account for the rights and freedoms of affected people. Before deployment, law enforcement must complete a fundamental rights impact assessment and register the system in the EU database; in urgent cases, deployment may begin before registration, provided the system is registered without undue delay afterwards.

AI that poses limited risk is subject to lighter transparency obligations: developers and deployers must ensure that end users are aware they are interacting with AI. AI that poses minimal risk is left unregulated entirely; this covers the majority of AI applications currently available in the EU single market, such as AI-enabled video games and spam filters.

The EU AI Act predominantly imposes obligations on providers and developers of high-risk AI systems, including those that intend to place high-risk AI systems on the EU market, regardless of whether they themselves are established within or outside the EU. Third-country providers are therefore equally liable if their AI system qualifies as high-risk. Chapter III sets out the obligations for high-risk AI systems and defines them as those that are:

  • Used as a safety component of, or are themselves, a product covered by the EU laws listed in Annex I of the Act and required to undergo a third-party conformity assessment under those laws; or

  • Listed under Annex III, unless the AI system only performs a narrow procedural task; improves the result of a previously completed human activity; detects decision-making patterns or deviations from prior patterns without replacing or influencing the prior human review; or performs a task preparatory to an assessment relevant to the use cases listed in Annex III; or

  • Used to profile individuals, which always qualifies as high-risk.
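The three-pronged test above can be illustrated as a small decision function. This is a simplified sketch for exposition only, not a compliance tool; the class, field names, and flags are hypothetical and do not appear in the Act.

```python
# Hypothetical sketch of the EU AI Act's high-risk test; the names below
# are illustrative assumptions, not terms from the Act itself.
from dataclasses import dataclass

@dataclass
class AISystem:
    annex_i_safety_component: bool    # safety component of / product under Annex I laws
    annex_i_conformity_required: bool  # needs third-party conformity assessment
    annex_iii_listed: bool             # falls under an Annex III use case
    narrow_or_preparatory_only: bool   # only performs a narrow/preparatory task
    profiles_individuals: bool         # profiling always counts as high-risk

def is_high_risk(s: AISystem) -> bool:
    if s.profiles_individuals:
        return True
    if s.annex_i_safety_component and s.annex_i_conformity_required:
        return True
    if s.annex_iii_listed and not s.narrow_or_preparatory_only:
        return True
    return False

# Example: a CV-screening tool (Annex III, employment) with no exception applying.
cv_screener = AISystem(False, False, True, False, False)
print(is_high_risk(cv_screener))  # True
```

The same function returns False once an exception applies, which mirrors why providers must document their assessment when they conclude a system is not high-risk.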

Providers who believe their AI system is not high-risk must document that assessment before placing the tool on the market or putting it into service. Typical high-risk AI use cases include:

  • non-banned biometrics

  • critical infrastructure

  • educational and vocational training

  • employment, worker management, and access to self-employment

  • access to and enjoyment of essential public and private services

  • law enforcement

  • migration, asylum, and border control management, and

  • administration of justice and democratic processes.

High-risk AI providers must:

  • establish a risk management system covering the entire lifecycle of the AI system

  • conduct data governance

  • draw up technical documentation that demonstrates compliance and enables its assessment

  • design the AI system for record-keeping

  • provide instructions for use to downstream deployers

  • design the system to allow human oversight and to achieve high levels of accuracy, robustness, and cybersecurity, and

  • establish a quality management system to ensure compliance.

A General Purpose AI (GPAI) model is an AI model, including one trained on a large amount of data using self-supervision at scale, that displays significant generality, is capable of competently performing a wide range of distinct tasks regardless of how it is placed on the market, and can be integrated into various downstream systems and applications. The definition does not cover AI models used for research, development, or prototyping before release on the market. Under the EU AI Act, all GPAI model providers must draw up technical documentation and instructions for use, comply with copyright law, and publish a summary of the content used to train the model. Providers of models that present a systemic risk must additionally conduct model evaluations and adversarial testing, track and report serious incidents, and put appropriate cybersecurity protections in place. Providers of free and open-licence GPAI models are only required to comply with copyright law and publish training data summaries, unless their model presents a systemic risk.
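The tiered GPAI obligations just described can be summarised in a short sketch. The function name and obligation labels are hypothetical shorthand for the duties listed above, not wording from the Act.

```python
# Illustrative mapping of GPAI provider duties to the two flags that
# drive them; labels are informal shorthand, not the Act's language.
def gpai_obligations(systemic_risk: bool, open_license: bool) -> list[str]:
    base = ["comply with copyright law", "publish training-data summary"]
    full = ["technical documentation", "instructions for use"] + base
    extra = ["model evaluations", "adversarial testing",
             "incident tracking and reporting", "cybersecurity protections"]
    if systemic_risk:
        return full + extra   # systemic-risk models: all duties apply
    if open_license:
        return base           # free and open-licence models: reduced duties
    return full               # every other GPAI model

print(gpai_obligations(systemic_risk=False, open_license=True))
# → ['comply with copyright law', 'publish training-data summary']
```

Note that the systemic-risk check comes first: an open licence does not reduce the duties of a model that presents a systemic risk.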
