
What Is the EU AI Act?

Julie Peterson
Product Marketing

The technological advancement of artificial intelligence (AI) is moving at an unprecedented pace, and this rapid development offers innumerable benefits to humankind. AI models can analyze X-rays, MRIs, CT scans, and even retinal images with remarkable speed and accuracy to detect tiny anomalies, like early signs of cancer, that a human may miss. AI can analyze climate data, such as satellite imagery, ocean currents, atmospheric chemistry, and energy consumption patterns, at incredible scale to predict major weather events. Global organizations use AI to optimize complex supply chains, anticipating geopolitical, weather, and other events that may disrupt supplies and increase costs.

For all its promise, however, the use of AI carries risk. The fear is that, left unchecked, AI could be used for harm: violating personal privacy, making biased or discriminatory decisions, and spreading misinformation. For these reasons, the European Union (EU) is stepping in.

The EU has passed the first comprehensive law governing the development and deployment of AI. The EU AI Act is designed to implement AI accountability. The Act applies to any company that builds, sells, or uses AI in the EU, or whose AI systems are placed on the EU market or affect people in the EU, regardless of where the company itself is located. As a result, the EU AI Act reaches well beyond companies based in the EU.

At its core, the EU AI Act takes a risk-based approach to AI safety and security. The Act identifies four levels of risk. Requirements and guardrails scale as the potential for harm increases. 

EU AI Act risk levels

The EU AI Act defines four risk levels for assessing AI safety and security.

Level 1 – minimal risk

Level 1 covers AI that poses little or no threat to safety. Use cases that fall under level 1 include spam filters for email, non-player characters in video games, and recommendation engines for movies, music, or shopping. Activities at this level remain largely unregulated because they don’t affect user safety or involve critical decision-making. Developers can continue innovating with little interference, but they must still comply with existing laws outside the EU AI Act.

Level 2 – limited risk

Level 2 includes AI systems, such as chatbots and deepfakes, that pose limited risk. At this level, the emphasis is on transparency. For example, users should understand when they are talking to a chatbot and not a human, and a deepfake video should carry a visible disclosure that the content is AI-generated. Trust is the important component here: the goal is to ensure that users know when they are engaging with AI-created output and are not being manipulated.
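
One lightweight way to meet the chatbot expectation, offered here only as a hedged sketch rather than anything prescribed by the Act, is to attach a plain-language disclosure to the assistant's output. The disclosure wording and the first-turn rule below are hypothetical choices.

```python
# Illustrative only: prepend a plain-language AI disclosure to chatbot output.
# The disclosure text and the "first turn only" rule are hypothetical choices.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def with_disclosure(reply: str, first_turn: bool) -> str:
    """Attach the disclosure at the start of a conversation."""
    return f"{AI_DISCLOSURE}\n\n{reply}" if first_turn else reply

print(with_disclosure("Hi! How can I help you today?", first_turn=True))
```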

Level 3 – high risk

Level 3 is where protections get serious. Any AI that influences critical decisions falls into this category. Examples include diagnostic tools in healthcare, recruitment systems, credit scoring, and systems used in law enforcement. 

High-risk AI must go through strict safety and security checks. This includes:

  • A continuous risk assessment process.
  • High-quality, representative training data to prevent bias.
  • Detailed logs to ensure accountability.
  • Human oversight built into operations so that people, not machines, have the final say in important decisions (a rough sketch follows this list).
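
As a rough illustration of the last two requirements, the Python sketch below logs every decision and routes lower-confidence cases to a human reviewer. The 0.9 threshold, field names, and approval workflow are hypothetical examples, not requirements taken from the Act.

```python
# Illustrative only: a human-in-the-loop gate for a high-risk AI decision,
# combining audit logging with human oversight. Thresholds and fields are hypothetical.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_decisions")

def decide_with_oversight(applicant_id: str, model_score: float, threshold: float = 0.9) -> str:
    """Auto-approve only high-confidence cases; refer everything else to a human reviewer."""
    decision = "auto_approved" if model_score >= threshold else "referred_to_human"
    # Record an audit trail entry for every decision the system touches.
    log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        "model_score": model_score,
        "decision": decision,
    }))
    return decision

print(decide_with_oversight("A-1042", 0.82))  # referred_to_human
```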

Level 4 – unacceptable risk

Level 4 covers any AI use considered so harmful to fundamental rights, safety, or democracy that it is banned outright in the EU. This includes exploiting vulnerable groups, such as targeting children or the elderly for harmful purposes; real-time biometric surveillance in public spaces; government social scoring of citizens’ behavior; and predictive policing tools based on profiling rather than actual criminal acts. The EU has decided that these practices are too dangerous to its citizens’ fundamental rights to be allowed at all.

EU AI Act implementation timeline

The EU AI Act is already in effect. Enforcement is rolling out in stages over the next several years:

  • August 1, 2024: The EU AI Act entered into force.
  • February 2, 2025: Prohibitions on unacceptable practices take effect.
  • August 2, 2025: New rules for general-purpose AI (GPAI) models go live. Models already on the market have until August 2, 2027 to comply.
  • August 2, 2026: High-risk AI system requirements take effect.
  • August 2, 2027: AI embedded in regulated products (like medical devices) must comply.

The phased rollout gives organizations some runway, but not much. Organizations with active AI deployments should be well on their way to meeting compliance requirements.

EU AI Act fines

The EU AI Act defines significant fines for noncompliant organizations. For prohibited practices, penalties can reach €35M or 7% of global annual turnover, whichever is higher. Noncompliance with level 3/high-risk requirements can cost organizations up to €15M or 3% of global turnover. Supplying incorrect or misleading information to authorities can trigger fines of up to €7.5M or 1% of turnover.
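
To make the arithmetic concrete, the short Python sketch below computes the maximum possible fine in each tier, assuming the "whichever is higher" reading above applies; the tier labels and the €2B turnover figure are hypothetical examples, not legal guidance.

```python
# Illustrative only: maximum fine caps under the EU AI Act's three penalty tiers,
# assuming the "whichever is higher" rule for standard (non-SME) organizations.

FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),     # €35M or 7% of global annual turnover
    "high_risk_noncompliance": (15_000_000, 0.03),  # €15M or 3%
    "misleading_information": (7_500_000, 0.01),    # €7.5M or 1%
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given tier and turnover."""
    fixed_cap, turnover_pct = FINE_TIERS[tier]
    return max(fixed_cap, turnover_pct * global_annual_turnover_eur)

# Example: a company with €2B global turnover facing a prohibited-practice violation.
print(f"€{max_fine('prohibited_practices', 2_000_000_000):,.0f}")  # €140,000,000
```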

The law acknowledges the limited resources of small and medium-sized businesses, so fines for organizations of this size are capped at the lower of the two amounts in each range. Even so, a reduced penalty can devastate a startup, and compliance should not be taken lightly.

Who must adhere to these regulations?

The EU AI Act identifies three main actors that must adhere to the regulations:

  • Providers: People or organizations that develop AI systems or general-purpose models and bring them to market.
  • Deployers: Organizations that use AI systems, such as a bank adopting an AI-driven credit scoring tool.
  • Importers: Companies bringing AI systems from outside the EU into the EU market.

The Act doesn’t stop at Europe’s borders. If a foreign company sells an AI system whose outputs are used in the EU, that company must comply. Providers outside the EU will need authorized representatives within the EU to coordinate and meet compliance requirements.

Exemptions exist. Purely personal use of AI or AI used solely for scientific research falls outside the scope. The law is targeting commercial and institutional use cases, not hobbyists or labs.

The impact of the EU AI Act

This legislation is designed to protect EU citizens and residents from potential harm caused by unregulated AI. The goal is to build trust in AI systems. Users will have more confidence knowing that AI systems meet stringent data quality, transparency, and oversight standards. Providers who invest in compliance will stand out as trustworthy in a crowded market.

The Act also requires AI literacy training to promote responsible AI use. This should result in fewer reckless deployments and more informed oversight, which ultimately is good for everyone.

The truth is that compliance comes at a price. Detailed documentation, monitoring, and governance frameworks all require time and money. For large enterprises, this may be a manageable cost of doing business. For startups, it could be a heavier lift.

The challenge will be balancing innovation with regulation. Some worry that burdensome rules could slow European competitiveness in AI. Others argue that without such guardrails, public trust will collapse, stalling innovation.

The global impact of the EU AI Act

The GDPR effect

Just as GDPR reshaped global privacy practices, the EU AI Act is likely to become a de facto standard for AI governance. Many companies will default to EU compliance worldwide rather than maintain separate standards.

Other governments are watching closely. The UK, Canada, and the US are all exploring their own AI regulations. The EU’s risk-based model provides a ready-made framework that others can adapt.

Market impact

Compliance could become a competitive advantage. Buyers may prefer vendors whose AI systems come with the assurance of meeting EU standards. In the long run, trustworthiness may be as important a differentiator as performance.

Meeting compliance standards: what organizations should be doing now

Organizations should already be taking a number of steps to meet the requirements of this law:

  • Conduct an AI inventory: Identify where AI is in use, especially in high-risk areas, and start with the most critical systems.
  • Assess data quality: Training data should be representative and free from bias. This is both a compliance requirement and a performance booster (a rough sketch of one such check follows this list).
  • Test models, applications, and agents: Implement a comprehensive, continuous testing program that identifies safety issues, such as bias and discrimination, as well as security issues.
  • Build governance frameworks: Appoint responsible leaders and integrate AI oversight into existing risk and compliance processes.
  • Improve AI literacy: Train employees who interact with AI to understand its limits, risks, and compliance obligations.
  • Establish risk assessment processes: Form a cross-functional AI board to evaluate risks and document defensible decisions.
  • Prepare for transparency obligations: Chatbots, generative AI tools, and GPAI models all require disclosures and documentation.
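
For the data quality step above, a first pass can be as simple as measuring how each group is represented in the training set. The Python sketch below is a minimal, hypothetical example: the age_band field, the 20% threshold, and the toy records are illustrative, and genuine bias testing requires far more than a representation count.

```python
# Illustrative only: flag demographic groups that fall below a minimum share
# of the training data. Field names and the threshold are hypothetical.
from collections import Counter

def representation_report(records: list[dict], group_field: str, min_share: float) -> dict:
    """Return each group's share of the dataset and whether it meets the minimum."""
    counts = Counter(r[group_field] for r in records)
    total = sum(counts.values())
    return {
        group: {"share": n / total, "meets_minimum": n / total >= min_share}
        for group, n in counts.items()
    }

# Toy example: records grouped by age band, with a 20% minimum share.
training_data = [
    {"age_band": "18-30"}, {"age_band": "18-30"}, {"age_band": "31-50"},
    {"age_band": "31-50"}, {"age_band": "31-50"}, {"age_band": "65+"},
]
for group, stats in representation_report(training_data, "age_band", min_share=0.20).items():
    print(group, f"{stats['share']:.0%}", "OK" if stats["meets_minimum"] else "UNDER-REPRESENTED")
```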

Organizations should be well on their way to implementing a security plan. Doing so will not only help avoid fines but also earn customer trust.

A new era of AI accountability

The EU AI Act is ushering in a new era of AI accountability. It divides the field into clear categories of risk and sets out obligations that scale with the potential for harm. For businesses, it represents both a compliance challenge and an opportunity to differentiate through trust.

Just as GDPR became the global standard for privacy, the EU AI Act could become the global standard for AI governance. Companies that treat compliance as more than a checkbox will be better positioned to thrive in an AI-powered future.

How TrojAI can help

Our best-in-class security platform for AI protects AI models, applications, and agents both at build time and run time. With support for agentic and multi-turn attacks, TrojAI Detect automatically red teams AI models to safeguard model behavior and deliver remediation guidance at build time. TrojAI Defend is our GenAI Runtime Defense solution that protects enterprises from threats in real time.

By assessing model behavioral risk during development and protecting models at run time, we deliver comprehensive security for your AI models, applications, and agents.

Want to learn more about how TrojAI secures the world's largest enterprises with a highly scalable, performant, and extensible solution?

Learn more at troj.ai now.