The AI Act is coming into force soon, and many businesses will need to develop their understanding of the types of applications that are prohibited, what the Act defines as ‘high-risk’ activity and what systems – if any – are exempt from the rules. An expert team of data protection specialists at DPO Centre has shared its exploration of the Act’s approach to the classification of AI systems, preparing businesses for the coming developments…
What is the AI Act?
If you’re not already familiar, the AI Act introduces a legal and regulatory framework for the development, deployment, and use of AI systems within the EU. The legislation categorises AI systems according to their potential impact on safety, human rights, and societal well-being. Some systems are banned entirely, while systems deemed ‘high-risk’ are subject to stricter requirements and assessments before deployment.
Classification of AI systems
As we’ve established, the AI Act aims to balance innovation with regulation to prevent harm to health, safety, and fundamental human rights. To do so, it takes a risk-based approach to the classification of AI systems. But what’s the benefit of this approach?
By assessing risk, the legislation recognises that not all AI systems pose the same level of threat, and that varying levels of control and oversight are required. As such, AI systems are categorised into different risk levels based on their potential impact, with the burden of compliance increasing in proportion to the risk.
There are three main categories of classification. AI applications falling into the prohibited systems category are banned entirely, due to the unacceptable potential for negative consequences. High-risk systems – those deemed to have a significant impact on people’s safety, wellbeing and rights – are allowed, but are subject to stricter requirements. Low-risk systems pose minimal danger and therefore have fewer compliance obligations.
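To make the tiered model concrete, here is a minimal sketch in Python of how an organisation might triage its own systems against the Act’s three broad tiers. The tier names, flags and triage logic are simplified assumptions for illustration – not an official classification tool, and not legal advice.

```python
# Illustrative only: a toy mapping of the AI Act's three broad risk tiers.
# The attribute names and rules below are simplified assumptions, not the
# Act's actual legal tests.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # banned outright (unacceptable risk)
    HIGH_RISK = "high-risk"     # allowed, but subject to strict requirements
    LOW_RISK = "low-risk"       # minimal compliance obligations

def classify(uses_prohibited_practice: bool, meets_high_risk_criteria: bool) -> RiskTier:
    """Triage a system into one of the Act's broad tiers (toy logic)."""
    if uses_prohibited_practice:
        return RiskTier.PROHIBITED
    if meets_high_risk_criteria:
        return RiskTier.HIGH_RISK
    return RiskTier.LOW_RISK

# Example: a hypothetical CV-screening tool falls under the employment
# high-risk area described later in this article.
print(classify(uses_prohibited_practice=False, meets_high_risk_criteria=True))
# RiskTier.HIGH_RISK
```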
Prohibited AI applications
The prohibitions on unacceptable-risk AI systems will come into force 6 months after the AI Act is published in the Official Journal of the EU. The European Commission will regularly review the list of prohibited AI applications, with the first review scheduled for 12 months after the Act enters into force.
The list below details the types of AI practices that fall under the prohibited category. These are the techniques and approaches with unacceptable risks to health and safety or fundamental human rights:

- Subliminal or purposefully manipulative techniques that materially distort a person’s behaviour and cause, or are likely to cause, significant harm
- Exploiting vulnerabilities related to age, disability, or a person’s social or economic situation
- Social scoring that leads to detrimental or unfavourable treatment
- Predicting the risk of a person committing a criminal offence based solely on profiling or personality traits
- Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
- Emotion recognition in the workplace or educational institutions, except for medical or safety reasons
- Biometric categorisation to infer sensitive attributes such as race, political opinions, or sexual orientation
- ‘Real-time’ remote biometric identification in publicly accessible spaces for law enforcement, subject to narrow exceptions
What is ‘high-risk’ activity?
Most of the AI Act addresses the regulation of high-risk AI systems, which fall into three distinct categories:
When the AI system is a certain type of product itself
This refers to AI systems that are not a component or feature of a larger product, but rather the product itself. Some examples of such systems would include medical devices, heavy industrial machinery, cars, and toys. A more comprehensive list features in Annex I of the AI Act. Many of these types of products are already regulated by certain EU harmonisation laws.
For businesses that develop or deploy AI systems in a sector with tightly managed safety legislation, there is a high probability the system will be covered here. As these products are already subject to strict safety regulations, they are automatically considered a high-risk category under the AI Act.
When the AI system is a safety component of a certain type of product
In short, this is where an AI system isn’t a standalone product, but performs safety-related functions within a product, such as monitoring, controlling, or managing safety features. Many of these systems relate to products listed in Annex I of the AI Act, such as industrial machinery, lifts, medical devices, motor vehicles and so on.
When the AI system matches a defined list of ‘high-risk’ systems
There are certain AI systems not listed in Annex I that are also considered high risk. This defined list includes systems that would significantly impact people’s opportunities and potentially cause systemic bias against certain groups.
These systems fall into 8 broad areas:
Biometrics
Certain biometric processing is entirely prohibited, as detailed above, but all other biometric processing is classified as high risk (with the exception of ID verification of an individual for cybersecurity purposes, like Windows Hello).
Critical infrastructure
AI systems used as safety components in managing critical digital infrastructure (similar to the list in Annex I) and AI systems used in the supply of water, gas, or electricity.
Education
Any AI system determining admissions or evaluating learning outcomes is high risk due to the potential impact on lives. For example, some of these systems may risk perpetuating historic discrimination against women and ethnic minorities.
Employment & management
AI systems used for recruitment, job application analysis, or candidate evaluation are considered high risk, once again due to their potential impact on people’s lives. Similarly, AI tools used for performance monitoring, managing work relationships, or terminating employment are also high risk.
Access to essential services
Systems determining access to essential services, whether public benefits such as unemployment, disability, or healthcare support, or private services such as credit scoring.
Law enforcement
Certain law enforcement tasks are considered high risk, including the use of lie detectors or similar biometric tools to assess testimony, and systems that assess the likelihood of an individual reoffending.
Immigration
Systems used to assess the security risk of migrants entering the EU, or to process and evaluate asylum claims. AI systems used to verify ID documents are exempt from this.
Administration of justice and democratic processes
This includes AI systems used to research or interpret the law, such as legal databases used by lawyers and judges, as well as systems that could influence voting, such as those used to target political ads.
Exemptions for high-risk and prohibited AI systems
The AI Act exempts certain AI systems that would otherwise be considered high risk or prohibited. Exemptions to the prohibited category notably cover research and national security purposes.
High-risk system exemptions mainly fall under the following criteria (illustrated in the sketch after this list):
- The AI performs only a narrow procedural task
- The AI improves on the result of a previously completed human activity
- The AI is meant to detect or monitor bias or other patterns in decision-making, but doesn’t replace human decision-making and is subject to human review
- The AI is used for a preparatory task relevant to the assessment of an otherwise high-risk task, i.e. you can use AI to help you assess your use case
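As a purely illustrative companion to the criteria above, the sketch below encodes them as a simple checklist. The flag names and the any()-based logic are simplifying assumptions for demonstration, not the Act’s actual legal test.

```python
# Illustrative sketch of the high-risk exemption criteria listed above.
# Flag names and the any()-based logic are simplifying assumptions,
# not the Act's actual legal test.
from dataclasses import dataclass

@dataclass
class ExemptionProfile:
    narrow_procedural_task: bool = False
    improves_prior_human_activity: bool = False
    detects_bias_with_human_review: bool = False
    preparatory_assessment_task: bool = False

def may_be_exempt(profile: ExemptionProfile) -> bool:
    """Return True if any of the listed exemption criteria applies (toy logic)."""
    return any([
        profile.narrow_procedural_task,
        profile.improves_prior_human_activity,
        profile.detects_bias_with_human_review,
        profile.preparatory_assessment_task,
    ])

# Example: a hypothetical tool that only pre-sorts documents ahead of a
# human-led assessment.
print(may_be_exempt(ExemptionProfile(preparatory_assessment_task=True)))  # True
```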
How will the AI Act affect organisations using high-risk AI systems?
High-risk AI systems require thorough risk and security assessments and may need EU registration and third-party evaluation. There are also substantial transparency obligations, and users must be clearly informed about how the AI system is deployed and how it functions. Organisations should develop and maintain compliance frameworks to ensure adherence to the AI Act’s requirements, including conducting regular audits and keeping detailed documentation to demonstrate compliance.
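As a closing illustration, the sketch below shows one way those obligations might be tracked internally as a simple checklist. The item names are assumptions drawn from the summary above, not an official or exhaustive list.

```python
# Illustrative compliance checklist for a high-risk AI system, based on the
# obligations summarised above. Item names are assumptions for illustration.
HIGH_RISK_OBLIGATIONS = {
    "risk_and_security_assessment": False,
    "eu_registration_if_required": False,
    "third_party_evaluation_if_required": False,
    "transparency_information_for_users": False,
    "regular_audits": False,
    "detailed_documentation": False,
}

def outstanding_items(checklist: dict[str, bool]) -> list[str]:
    """Return the obligations not yet evidenced (toy helper)."""
    return [item for item, done in checklist.items() if not done]

# At the outset, everything is outstanding:
print(outstanding_items(HIGH_RISK_OBLIGATIONS))
```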