The European Union (EU) Artificial Intelligence Act is a landmark piece of legislation and one of the first laws to go into effect governing the application and use of artificial intelligence (AI) technology. This regulatory framework was created to govern the use, development, and deployment of AI systems within the EU and to establish an operational and cybersecurity framework for the businesses that build and deploy them.

As one of the first pieces of legislation of its kind, the EU AI Act aims to ensure that AI technologies are developed and used in a safe and ethical manner that protects the fundamental rights and freedoms of individuals. It also introduces a comprehensive set of rules that apply to all AI systems that can have an impact on people in the EU.

This blog will explore why the EU AI Act is significant, the key provisions of the Act, and what comes next for the regulation and use of AI tools.

Find out how UpGuard safely utilizes AI to help companies manage their risks >

EU Artificial Intelligence Act History

The EU Artificial Intelligence Act was first proposed by the European Commission in April 2021, stemming from an initiative to ensure AI technology was deployed safely across all uses and functions. Following the rapid adoption of generative AI and large language models (LLMs) like ChatGPT and Bard, the Act addresses growing concerns around privacy and accountability in AI applications.

World leaders from the US, UK, China, and elsewhere strongly advocated for AI regulation, and even OpenAI CEO Sam Altman, whose company created ChatGPT, argued that regulation was necessary. It’s important to note that while AI regulation is at the forefront of discussions, the goal is to manage AI technology properly without restricting it in ways that discourage further AI innovation.

In March 2024, members of the European Parliament, representing the EU’s 27 member states, voted overwhelmingly in favor of the AI Act, with its first provisions taking effect six months after the Act enters into force. The European Commission expects the AI Act to be in full effect, with its complete set of regulations, by 2026.

Why is the EU Artificial Intelligence Act significant?

The EU Artificial Intelligence Act is significant because it is the first-ever comprehensive legal framework on AI, and it has the potential to set the global standard for AI regulation. As AI technology becomes increasingly integrated into every aspect of daily life, from healthcare and education to employment and law enforcement, the risks associated with the technology grow accordingly.

These include issues of data privacy, security, fairness, and accountability. The main goal of the EU AI Act is to address these challenges by establishing a framework that balances AI innovation with the protection of individual rights, limiting risk and preventing abuse of these technologies.

The potential of AI is enormous, which is why the EU AI Act aims to ensure it is used safely, ultimately protecting people, the internal market, and society at large. The Act is structured in a future-proof manner, categorizing AI systems by their impact, risk level, and scope.

Key provisions of the EU Artificial Intelligence Act

The EU AI Act will use a risk-based approach to define its framework. By combining this approach with other established rules, the EU can build trust in the use of AI technology, drive AI innovation, and protect individuals from exploitation. It addresses risks that existing legislation, such as the EU General Data Protection Regulation (GDPR) and the Digital Operational Resilience Act (DORA), could not fully cover.

The main provisions that are listed in the EU AI Act are as follows:

  • Establishing a risk-based approach to AI regulation
  • Setting requirements for AI systems
  • Regulating general-purpose AI (GPAI) models
  • Determining which AI systems are prohibited outright
  • Allowing AI developers to test and train their AI models before placing them on the market
  • Creating new regulatory bodies to enforce AI regulation
  • Establishing penalties for non-compliance

A risk-based approach to regulating AI

The risk-based approach to regulating AI enables a flexible response to the rapid evolution of AI technologies. By categorizing AI systems based on their level of risk, the AI Act ensures that stricter controls are applied to applications considered a higher risk, while lower-risk applications are subject to less stringent requirements.

The Act classifies AI risk into four different categories:

  1. Unacceptable risk
  2. High risk
  3. Limited risk
  4. Minimal or no risk

1. Unacceptable risk

AI systems considered an unacceptable risk are those that clearly threaten people's safety, livelihoods, and fundamental rights. Systems classified as an unacceptable risk are banned outright in the EU. Examples of unacceptable systems, per the EU's political agreement, include:

  • “Biometric categorisation systems that use sensitive characteristics (e.g., political, religious, sexual orientation, race)”
  • “Social scoring systems based on social behaviour or personal characteristics”
  • “Emotion recognition in the workplace or educational institutions”
  • “AI systems that manipulate human behaviour to circumvent their free will”
  • “Scraping of facial images from the Internet or CCTV footage to create facial recognition databases”
  • “AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation)”
  • “Certain applications of predictive policing”

There are exceptions to this rule, particularly for law enforcement, where functions such as remote biometric identification or facial recognition using AI technology can be allowed. However, any such use must be approved beforehand and reported on afterward. The use of AI systems to identify suspects comes with a strict set of rules and is limited to serious crimes, such as sex trafficking, sexual exploitation, abduction, terrorism, murder, and robbery.

2. High risk

The high-risk category applies to AI systems used in sectors such as critical infrastructure, education, employment, essential private and public services, law enforcement, immigration, and the justice system. High-risk systems are subject to strict obligations before they can be placed on the market, including:

  • Data governance
  • Detailed risk assessments and mitigation processes
  • Human oversight
  • Transparency of use
  • Documentation of the system and its intended purpose
  • Robustness and security

These systems are subject to strict compliance requirements, including accurate data management, transparency, robustness, and security measures. High-risk applications must also complete a conformity assessment before deployment to ensure they meet the AI Act’s standards.

As such, one major provision of the Act allows AI developers to test and train their AI models before going to market. Developers can conduct real-world testing within regulatory sandboxes to ensure their systems adhere to the Act's high-risk standards.

3. Limited risk

AI systems classified as limited risk must adhere to specific transparency obligations. Most notably, the system must inform users when they are interacting with an AI system; a chatbot, for example, must make users aware they are talking to AI before they make any decisions or continue the interaction. Additionally, AI-generated content must be clearly labeled so the public is not misled about its origin. This extends to audio and video content, such as deepfakes designed to deceive viewers.
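
As a simple illustration of what these transparency obligations might look like in practice, the sketch below (in Python) prepends an AI disclosure to a chatbot reply and labels AI-generated media. The wording and function names are illustrative assumptions, not language mandated by the Act.

```python
# Illustrative only: one way a limited-risk system might surface its AI origin.
AI_DISCLOSURE = "You are chatting with an AI assistant."

def wrap_chatbot_reply(model_reply: str) -> str:
    """Prepend the disclosure expected of limited-risk systems such as chatbots."""
    return f"{AI_DISCLOSURE}\n\n{model_reply}"

def label_generated_media(caption: str) -> str:
    """Mark AI-generated audio or video (e.g., deepfakes) so viewers are not misled."""
    return f"[AI-generated content] {caption}"

print(wrap_chatbot_reply("We are open 9am-5pm, Monday to Friday."))
print(label_generated_media("Synthetic voice-over for the product demo"))
```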

4. Minimal or no risk

Most AI systems fall into the category of minimal or no risk. Minimal-risk systems can operate with few regulatory constraints, reflecting the EU's intent to encourage innovation and AI advancement. Although these systems are subject to few regulations, developers are still encouraged to follow best practices and ethical standards to ensure trustworthiness and user safety.
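
To make the four tiers easier to reason about, here is a minimal sketch of how an organization might inventory its AI systems against the Act's risk categories. The example systems, the tier each is mapped to, and the one-line obligation summaries are illustrative assumptions, not an official mapping or legal advice.

```python
from enum import Enum

class AIRiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict obligations and conformity assessment
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # few regulatory constraints

# Hypothetical internal inventory: each system mapped to a tier based on its intended use.
ai_inventory = {
    "resume-screening-model": AIRiskTier.HIGH,       # employment is a high-risk sector
    "customer-support-chatbot": AIRiskTier.LIMITED,  # must disclose it is an AI system
    "spam-filter": AIRiskTier.MINIMAL,
}

def obligations_for(tier: AIRiskTier) -> str:
    """Return a one-line summary of the obligations attached to a tier."""
    return {
        AIRiskTier.UNACCEPTABLE: "Banned: cannot be placed on the EU market.",
        AIRiskTier.HIGH: "Risk management, documentation, human oversight, and a "
                         "conformity assessment before deployment.",
        AIRiskTier.LIMITED: "Transparency: users must know they are interacting with AI.",
        AIRiskTier.MINIMAL: "No specific obligations; voluntary best practices encouraged.",
    }[tier]

for system, tier in ai_inventory.items():
    print(f"{system}: {tier.value} -> {obligations_for(tier)}")
```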

Scope of the EU AI Act

The EU AI Act applies to all companies operating within the EU or handling data from EU citizens. It applies to AI systems regardless of where the provider is based, as long as the system is used in the EU. This wide-reaching scope means that companies outside the EU must also comply if their systems impact EU citizens.

The main exceptions to the EU AI Act are AI systems used specifically for military or defence purposes or systems used strictly for research and scientific study.

When does the EU AI Act go into effect?

As of March 2024, the EU Artificial Intelligence Act has been voted on and approved by the European Parliament. The text is expected to be finalized and entered into law by April 2024, with enforcement phased in beginning six months after the Act enters into force.

The ban on systems identified as unacceptable risk will take effect first, six months after entry into force, roughly around October 2024.

GPAI systems have 12 months to abide by the requirements of the Act or 24 months if the product is already on the market.

The EU expects that the EU AI Act will be fully in effect and enforceable by April 2026, pending any new revisions or amendments that may come at a future date.

Who enforces the EU AI Act?

The European AI Office, which was established in February 2024 under the European Commission, is in charge of enforcing the EU AI Act. Additional enforcement of the Act will be carried out by authorities designated by each member state.

The Act also details a plan for oversight and enforcement across the EU, including establishing a European Artificial Intelligence Board. The European AI Board will facilitate cooperation between member states and ensure a unified approach to the Act's application and enforcement across the EU.

Penalties for non-compliance

Non-compliance with the EU Artificial Intelligence Act can result in significant financial penalties, and potentially legal action. Fines for violations of the EU AI Act depend on the type of AI system, the size of the company, and the severity of the violation:

  • €7.5 million or 1.5% of a company's total worldwide annual turnover (whichever is higher) for supplying incorrect, incomplete, or misleading information to authorities
  • €15 million or 3% of a company's total worldwide annual turnover (whichever is higher) for breaches of obligations listed in the Act
  • €35 million or 7% of a company's total worldwide annual turnover (whichever is higher) for violations of prohibited AI applications
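
To illustrate how the "whichever is higher" rule plays out, the short sketch below computes the applicable fine for each violation tier against a company's worldwide annual turnover. The tier names and the turnover figure are hypothetical; the amounts and percentages mirror the list above.

```python
# Fine tiers from the EU AI Act: a fixed amount or a percentage of
# worldwide annual turnover, whichever is higher.
FINE_TIERS = {
    "misleading_information": (7_500_000, 0.015),  # €7.5M or 1.5%
    "breach_of_obligations": (15_000_000, 0.03),   # €15M or 3%
    "prohibited_ai_use": (35_000_000, 0.07),       # €35M or 7%
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Return the maximum fine for a violation under the 'whichever is higher' rule."""
    fixed, pct = FINE_TIERS[violation]
    return max(fixed, pct * annual_turnover_eur)

# Hypothetical company with €2 billion in worldwide annual turnover:
print(max_fine("prohibited_ai_use", 2_000_000_000))  # 140000000.0 (7% exceeds €35M)
```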

General-purpose AI (GPAI) regulations

General-purpose AI (GPAI) systems were a major point of contention during the EU Artificial Intelligence Act deliberations. The Act defines GPAI systems as AI models that can perform a wide range of tasks without being specifically designed for any single one. However, without proper oversight, GPAI can pose serious risks, such as making decisions that are difficult for humans to understand or that disregard ethical and moral considerations.

The Act recognizes the potential of GPAI, as well as the risks it brings, given its broad applicability and potential impact on society. As such, GPAI models can be classified as posing "systemic risk," particularly when the cumulative compute used to train them exceeds 10^25 floating-point operations (FLOPs).
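
For a sense of how that threshold might be applied, the sketch below checks a model's cumulative training compute against the 10^25 FLOP mark. The threshold comes from the Act; the example compute figures are made up for illustration.

```python
# Training-compute threshold above which a GPAI model is presumed to pose systemic risk.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if a GPAI model's training compute crosses the Act's threshold."""
    return training_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical models with made-up training-compute figures:
print(presumed_systemic_risk(3e24))  # False: below the threshold
print(presumed_systemic_risk(2e25))  # True: presumed systemic risk, extra obligations apply
```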

Before GPAI systems can be placed on the market, providers must notify the EU AI Office and demonstrate that they meet the following requirements:

  • Clear and transparent documentation of the system’s training and testing process
  • Compliance with EU copyright law
  • Model evaluations covering security, potential risks, and incident monitoring processes

To regulate GPAI, the EU Parliament and Council proposed a framework that can adapt to the rapid development of GPAI. This includes monitoring the evolution of GPAI systems, assessing their impact, and classifying them according to the risk-based approach if they are used in high-risk applications. The Act aims to ensure that as GPAI systems become more integrated into society, they do so in a way that protects EU citizens while promoting innovation at the same time.

How companies can achieve compliance with the EU Artificial Intelligence Act

Achieving compliance with the EU Artificial Intelligence Act requires companies to take a proactive and thorough approach to understanding and implementing the necessary measures based on the classification of their AI systems. The following steps can help companies begin implementing safe, ethical AI usage:

  1. Conducting risk assessments: Companies should start by conducting a comprehensive risk assessment of their AI systems to determine their classification under the Act's risk-based framework. This involves evaluating the intended use, potential impact, and level of interaction with individuals. High-risk AI systems will require additional compliance review and implementation.
  2. Following regulatory requirements: For AI systems classified as high-risk, companies must adhere to specific regulatory requirements outlined in the Act. This includes ensuring data governance and accuracy, implementing transparency measures, and facilitating human oversight. Documentation and record-keeping will also be essential for demonstrating compliance.
  3. Ethical AI practices: Beyond legal compliance, adopting ethical AI practices is essential. This involves integrating ethical considerations into the AI system's lifecycle, from design and development to deployment and use. Companies should establish ethical guidelines that align with the Act's objectives, focusing on fairness, accountability, and non-discrimination.
  4. Compliance management: Organizations may need to invest in new technologies, processes, or personnel to meet compliance requirements. This could include developing or acquiring tools for monitoring and evaluating AI systems, training staff on compliance issues, and establishing internal controls regarding AI ethics and legality.
  5. Engagement with regulating bodies and key stakeholders: Open dialogue with regulators and stakeholders can provide valuable insights into compliance expectations and best practices. Companies should engage in industry forums, consultations, and collaborative initiatives to stay informed on regulatory developments.
  6. Regular monitoring and reporting: Companies must regularly monitor their AI systems for compliance with the Act and report any significant changes or incidents to the relevant authorities. This includes updating risk assessments and compliance measures as the AI system evolves or as new regulatory guidelines are established.
  7. Education and training: Employees must be trained on AI compliance and the specific requirements of the EU AI Act. Companies should implement comprehensive training programs for key staff involved in the development, deployment, and management of AI systems.
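
To show how these steps could translate into something auditable, here is a minimal sketch of a per-system compliance record that mirrors the list above: risk classification, conformity assessment status, oversight ownership, training, and ongoing monitoring. The field names and the one-year review cadence are assumptions, not a format prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIComplianceRecord:
    """Minimal per-system compliance record mirroring the steps above (illustrative)."""
    system_name: str
    risk_tier: str                      # e.g., "high", "limited", "minimal"
    last_risk_assessment: date
    conformity_assessment_passed: bool  # required before deploying high-risk systems
    human_oversight_owner: str          # who is accountable for oversight
    training_completed: bool            # staff trained on EU AI Act requirements
    incidents: list[str] = field(default_factory=list)

    def needs_review(self, today: date, max_age_days: int = 365) -> bool:
        """Flag the system for re-assessment if the last review is stale."""
        return (today - self.last_risk_assessment).days > max_age_days

record = AIComplianceRecord(
    system_name="resume-screening-model",
    risk_tier="high",
    last_risk_assessment=date(2024, 3, 1),
    conformity_assessment_passed=False,
    human_oversight_owner="HR compliance team",
    training_completed=True,
)
print(record.needs_review(date(2025, 6, 1)))  # True: assessment is over a year old
```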
