EU AI Act Compliance: What GRC Leaders Need to Know Now

The EU AI Act marks a new milestone in AI governance. Learn key deadlines, requirements, and actionable compliance strategies to stay ahead.

The EU AI Act is no longer on the horizon—it’s happening. Enforcement is moving forward, and the deadlines are already locked in. For governance, risk, and compliance professionals, this is the time to take action, not take chances.

Whether you’re a provider, deployer, or third-party partner, this sweeping regulation is reshaping the way AI governance must be done across Europe. The clock is ticking to evaluate AI risks, align internal processes, and ensure organizational readiness.

Let’s break down what’s enforceable now, what’s next, and how GRC teams can lead their companies through one of the most important regulatory shifts of the decade.

What the EU AI Act Covers

At its core, the EU AI Act is a risk-based regulatory framework. It classifies AI systems into four categories:

  • Unacceptable Risk – Prohibited entirely (e.g., social scoring systems)
  • High Risk – Requires documentation, audits, and human oversight (e.g., healthcare, employment, public services)
  • Limited Risk – Must meet transparency obligations (e.g., chatbots)
  • Minimal Risk – Low regulatory burden (e.g., spam filters)

Each category comes with specific obligations. Certain practices are flat-out prohibited—like real-time biometric surveillance in public spaces (unless narrowly exempted). Others, like AI for law enforcement or biometric ID verification, are considered high-risk and heavily regulated.

Meanwhile, R&D activities, military applications, and some open-source GPAIs may be exempt, depending on use case and deployment status.

Tip: If you’re in research or product prototyping, check for carve-outs—but still document your governance safeguards.

How do I know if my system is high-risk?

If your AI touches decisions about people’s rights, safety, or access to services, chances are you’re in scope. You’ll also need to determine whether your model qualifies as a general-purpose AI (GPAI)—a type of model trained for many downstream uses. GPAIs come with their own set of governance rules, especially if they pose systemic risks.

Tip: Map your AI portfolio against the risk tiers in Annex III of the Act. If you’re unsure, assume scrutiny and prepare accordingly.
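
One practical starting point is a structured inventory keyed to the Act's four tiers. Below is a minimal, illustrative Python sketch; the system names and tier assignments are hypothetical stand-ins, and Annex III remains the authoritative classification source.

    from collections import defaultdict

    # Illustrative only: real classification must follow Annex III criteria,
    # not a pre-assigned label like the ones below.
    RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

    ai_inventory = [
        {"system": "cv-screening-tool", "use_case": "employment", "tier": "high"},
        {"system": "support-chatbot", "use_case": "customer service", "tier": "limited"},
        {"system": "spam-filter", "use_case": "email hygiene", "tier": "minimal"},
    ]

    # Group systems by tier so each can be assigned the right obligations.
    by_tier = defaultdict(list)
    for entry in ai_inventory:
        assert entry["tier"] in RISK_TIERS, f"unknown tier: {entry['tier']}"
        by_tier[entry["tier"]].append(entry["system"])

    for tier in RISK_TIERS:
        print(f"{tier:>12}: {by_tier[tier]}")

Even a sketch like this makes gaps visible: any system you cannot confidently place in a tier is exactly the one to treat with the "assume scrutiny" approach above.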

Does the EU AI Act Apply to My Organization?

Most likely, even if you’re not in the EU. The Act applies extraterritorially. If your organization sells, deploys, or provides AI systems that affect people in the EU, you’re likely in scope. That includes U.S.-based companies offering SaaS tools or open-source GPAIs online. Your role—whether provider, deployer, importer, or distributor—determines your exact obligations.

Best practice: Clarify internal roles. Identify who in your organization is legally responsible under the AI Act, especially if you rely on vendors or white-label solutions.

Penalties for EU AI Act Non-Compliance

The penalties are steep. Maximum fines exceed even those under GDPR, placing the AI Act in the top tier of EU enforcement:

  • Up to €35 million or 7% of global annual turnover, whichever is higher, for engaging in prohibited practices
  • Up to €15 million or 3% of global annual turnover, whichever is higher, for other compliance failures (e.g., GPAI documentation gaps)
  • Public naming, product recalls, or market bans for repeated or systemic noncompliance

Beyond regulatory fines, companies that fail to comply with the EU AI Act may face civil redress and reputational damage. Citizens have the right to submit complaints about AI systems and “receive explanations about decisions based on high-risk AI systems that affect their rights.” How actively those rights are exercised will depend in part on citizens’ AI literacy and their awareness of how these technologies affect them.

The good news? Following the EU’s GPAI Code of Practice can mitigate risk and demonstrate proactive alignment—something authorities may consider during enforcement.

Your shield: Align with the voluntary Code of Practice now to reduce exposure later.

Key EU AI Act Enforcement Deadlines and What to Do About Them

Enforcement is staggered—but already underway.

Deadline       Requirement
Feb 2, 2025    Prohibited practices banned; AI literacy training required for providers and deployers of AI systems
Aug 2, 2025    Compliance required for general-purpose AI (GPAI) providers
Aug 2, 2026    High-risk AI obligations take effect
Aug 2, 2027    Rules extend to high-risk systems in products covered by Annex I harmonization legislation
By 2030        Full integration into large-scale EU IT systems must be complete

Don’t wait. Even limited-risk systems may require transparency disclosures before these deadlines. Let’s expand on what these requirements may mean for your organization.

AI Literacy Requirements: February 2, 2025

All organizations deploying or providing AI systems, regardless of risk level, must ensure that the people using those systems on their behalf are appropriately trained. This applies to employees, contractors, and service providers.

Effective AI literacy training should cover:

  • How AI systems work and what their limitations are
  • How to evaluate AI-generated outputs
  • How to manage legal and ethical considerations
  • How to maintain human oversight over automated decisions

Documentation alone won’t cut it. The training must be active, role-specific, and woven into your compliance program.

Still not sure about the AI literacy requirements? See how the European Commission defines the specific provisions in the AI Act.

Need help getting started?

Mitratech’s AI training solutions are designed to meet these evolving requirements.

General-Purpose AI (GPAI) Compliance: August 2, 2025

GPAI models—those capable of performing a range of tasks and used in diverse applications—face a specific set of obligations under the AI Act.

If your organization develops or places GPAI models on the EU market, you’ll need to:

  • Maintain detailed technical documentation
  • Publish summaries of training data
  • Implement copyright compliance policies
  • Monitor for systemic risks
  • Track and report serious incidents
  • Follow cybersecurity best practices

Although the GPAI Code of Practice is voluntary, it can help demonstrate good faith compliance and reduce potential penalties.
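
For teams that prefer to track these obligations as structured data, a lightweight checklist can tie each one to an owner and a piece of evidence. The sketch below is illustrative only; the owners, file paths, and field names are hypothetical, not anything the Act prescribes.

    from dataclasses import dataclass

    @dataclass
    class Obligation:
        name: str
        owner: str       # accountable team or role
        evidence: str    # link or path to supporting documentation
        complete: bool = False

    gpai_obligations = [
        Obligation("technical documentation", "ML platform team", "docs/model-card.md"),
        Obligation("training data summary", "data governance", "docs/training-data-summary.md"),
        Obligation("copyright compliance policy", "legal", "policies/copyright.md"),
        Obligation("systemic risk monitoring", "risk", "runbooks/systemic-risk.md"),
        Obligation("incident reporting process", "security", "runbooks/incidents.md"),
        Obligation("cybersecurity controls", "security", "controls/cybersecurity.md"),
    ]

    outstanding = [o.name for o in gpai_obligations if not o.complete]
    print(f"{len(outstanding)} GPAI obligations still open: {outstanding}")

Tying each obligation to an owner and an evidence location mirrors the way auditors will eventually ask for proof.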

High-Risk Systems Under Full Oversight: August 2, 2026

By this date, AI systems considered high-risk must meet comprehensive compliance requirements. This includes:

  • Formal risk assessments
  • Robust documentation
  • Real-time human oversight
  • Conformity assessments before deployment

Each EU member state must also offer at least one regulatory sandbox to support the safe testing of high-risk systems.

Expanded Compliance Requirements: August 2, 2027 and Beyond

From August 2027 onwards, high-risk AI systems embedded in products covered by the EU harmonization legislation listed in Annex I (such as machinery, medical devices, and toys) will also come into scope.

By then, organizations deploying AI in the EU will be expected to demonstrate, in proportion to each system’s risk tier:

  • Transparent operation
  • Ongoing risk mitigation
  • Human-in-the-loop decision-making

Looking ahead to 2030, AI systems integrated into large-scale EU IT infrastructure will face stricter governance and legal constraints related to security, justice, and civil liberties.

EU AI Act Compliance Measures

If you’re developing or placing AI systems on the EU market, you may need to:

  • Maintain technical documentation
  • Conduct conformity assessments for high-risk systems
  • Set up quality and risk management systems
  • Train your workforce on AI literacy
  • Publish training data summaries and ensure copyright compliance

For high-risk models, detailed logging, human oversight, and post-market monitoring are required. GPAI providers must also mitigate systemic risks, enforce cybersecurity protocols, and report serious incidents.
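
To make the logging and oversight duties concrete, here is one minimal way a deployer might capture decisions and escalate low-confidence outputs to a person. Treat it as a sketch under stated assumptions: the record fields, the confidence threshold, and the system name are invented for illustration, not requirements from the Act.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        """One logged decision from a high-risk AI system (hypothetical schema)."""
        system_id: str
        input_summary: str
        output: str
        confidence: float
        timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
        needs_human_review: bool = False

    REVIEW_THRESHOLD = 0.80  # hypothetical cutoff for routing outputs to a human

    def log_decision(record: DecisionRecord, audit_log: list) -> DecisionRecord:
        # Flag low-confidence outputs for human review (one way to operationalize
        # oversight) and retain every record for post-market monitoring and audits.
        record.needs_human_review = record.confidence < REVIEW_THRESHOLD
        audit_log.append(record)
        return record

    audit_log = []
    log_decision(DecisionRecord("cv-screening-v2", "applicant 1042", "reject", 0.64), audit_log)
    print(audit_log[0].needs_human_review)  # True: a person must confirm this decision

The point is the pattern, not the threshold: every automated decision leaves an auditable record, and uncertain ones reach a person before they take effect.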

Next step: Develop a compliance roadmap aligned with your risk classification. Don’t wait for enforcement to start before documenting your controls.

AI Act Action Plan: How GRC Leaders Can Prepare Today

To meet the EU AI Act’s deadlines and strengthen organizational readiness, consider the following actions:

Map Your AI Risk Profile
Catalog all AI systems in use, categorize them by risk tier, and document compliance needs for each.

Build Role-Based AI Literacy Programs
Develop training paths tailored to technical, managerial, and operational roles. Include legal, ethical, and practical dimensions of AI use.

Integrate AI Oversight Into GRC Programs
Embed AI controls into your broader risk and compliance frameworks. Consider adopting standards like ISO 42001 or the NIST AI RMF to support program development.

Strengthen Third-Party Governance
Audit your AI vendors and partners. Include EU AI Act requirements in contracts and monitor downstream compliance for GPAI applications.

Track Regulatory Shifts
The Commission may soon streamline rules for small and mid-sized companies. Stay informed to remain agile.

Ready to Lead in AI Compliance?

The EU AI Act marks a pivotal moment in AI governance. It is more than a compliance requirement: approached well, it builds trust, strengthens resilience, and enhances operational integrity.

GRC professionals play a central role in building responsible AI ecosystems that are safe, ethical, and resilient. Mitratech offers integrated solutions to help you navigate this landmark regulation. Download Mitratech’s AI Governance Brochure or request a demo of our connected risk and compliance solutions to get started.

Let’s build AI governance that works for people, organizations, and regulators.