EU AI Act: Key Principles and Considerations
Tuesday 4th June 2024
The European Union’s approach to Artificial Intelligence (AI) focuses on safeguarding the rights of citizens. While AI offers societal benefits, its evolution has raised concerns about safety, transparency, and overall impact on humans and the environment. In response, the EU has developed and recently approved the first comprehensive regulatory framework for AI systems: the EU Artificial Intelligence Act. This legislation applies to AI developers, deployers and users operating within the EU.
Purpose of the Act
As AI systems become more prevalent, the EU recognised the need to address risks that could lead to undesirable outcomes. The Act aims to prohibit practices with unacceptable consequences and establish rules for organisations using AI systems to mitigate adverse effects on individuals.
What are “AI systems” under the Act?
The Act defines an “AI system” as:
“a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
Businesses operating in the EU must assess whether their AI systems fall under this definition, triggering the Act’s applicability.
Main principles
The Act includes several fundamental principles:
- Banning Unacceptable-Risk AI Applications: specific AI applications posing an unacceptable risk to people’s rights, safety or livelihoods are banned outright (distinct from “high-risk” systems, which remain permitted subject to requirements). Banned practices include social scoring systems and emotion recognition applications in workplaces and educational institutions.
- Risk-Based Categorisation: the Act categorises AI systems based on the level of risk posed to individual rights:
  - High-Risk Systems: includes AI used in critical infrastructure, as safety components of products, in employment decisions and in automated examination systems in government departments.
  - Limited-Risk Systems: includes chatbots and AI-generated text, audio and video content, where transparency about the use of AI is required.
  - Minimal or No Risk Systems: includes AI-enabled video games and spam filters; this category covers most AI systems currently in use.
Territorial and Extra-Territorial Scope
Like the GDPR, the EU AI Act has extraterritorial scope, capturing certain organisations outside the EU. For example, a UK-based provider placing an AI system on the EU market will be subject to the Act, as will providers and deployers established outside the EU where the output produced by their AI system is used in the EU.
The Act’s obligations will take effect in phases over roughly three years, with the prohibitions applying first and the bulk of the remaining requirements following, giving organisations time to comply.
Considerations for Organisations Developing or Adopting AI Systems
Organisations to which the Act applies should:
- Review their AI solutions to determine the risk category into which they fall. High-risk systems must undergo a conformity assessment and bear a CE marking before being placed on the EU market.
- Conduct appropriate risk assessments that consider the impact on individual rights, documenting the outcome to create an audit trail.
- Consider related legal requirements such as privacy, data protection and intellectual property laws that interact with the Act.
- Establish written policies, procedures and governance structures for the development and use of AI.
If you have any questions about the EU Artificial Intelligence Act or AI usage, please contact one of our AI and technology law experts.
This article is part one in a series about AI.