Who does the EU AI Act Apply To? A complete guide for businesses

The EU AI Act sets strict rules for AI providers and users. Learn who it applies to, key risk categories, compliance steps, and what businesses must do to prepare.

The EU AI Act is reshaping how businesses develop and deploy artificial intelligence (AI) systems across Europe and beyond. As the world’s first comprehensive AI regulation, it establishes mandatory requirements for AI providers and users, backed by significant penalties for non-compliance.

Understanding who the Act applies to and what it requires is no longer optional for businesses operating in or selling to the EU market. 

This guide answers the question ‘who does the EU AI Act apply to?’, breaks down the key requirements, risk categories, and compliance steps to help your organisation navigate this landmark legislation. Plus, you can learn about our AI red teaming and penetration testing services to help meet the EU AI Act’s cybersecurity standards. 

What is the EU AI Act?

The EU AI Act is a comprehensive legal framework designed to regulate AI systems based on their potential risk to fundamental rights, safety, and society. 

Passed in 2024, it represents the first major regulatory attempt to govern AI at scale, establishing clear rules for developers, providers, and users of AI systems. 

This EU AI Act summary outlines how the Act takes a risk-based approach, categorising AI applications into four tiers: 

  • Unacceptable risk (banned)
  • High risk (heavily regulated)
  • Limited risk (transparency requirements)
  • Minimal risk (largely unregulated)

This framework aims to foster innovation while protecting citizens from AI systems that could cause harm, manipulate behaviour, or discriminate against individuals.

By establishing mandatory standards for high-risk AI systems and prohibiting certain applications entirely, the EU AI Act directly impacts how businesses design, deploy, and monitor AI technologies within the European market.

EU AI Act timeline

The EU AI Act was formally adopted in May 2024 and entered into force on 1 August 2024. However, the regulation follows a phased implementation approach, giving businesses time to prepare for compliance: 

  • 2 February 2025 – prohibitions on unacceptable-risk AI systems (i.e., banned applications are already illegal)
  • 2 August 2025 – requirements for general-purpose AI models (GPAIs)
  • 2 August 2026 – general application (majority of the Act’s provisions will come into effect)
  • 2 August 2027 – obligations for high-risk AI systems become fully enforceable

This staggered timeline allows organisations to assess their AI systems, implement necessary controls, and ensure compliance before penalties apply. 

Businesses deploying AI in the EU should be actively preparing now (particularly those operating high-risk systems or developing foundation models) to avoid scrambling as deadlines approach.

EU AI Act risk categories

The EU AI Act structures its requirements around four risk categories, each carrying different obligations and restrictions. 

Understanding which category your AI systems fall into is the first step toward compliance, as obligations scale dramatically with risk level. 

Unacceptable risk

Unacceptable risk systems are banned outright. These include AI that:

  • Manipulates human behaviour through subliminal techniques
  • Exploits vulnerabilities of specific groups 
  • Enables social scoring by governments 
  • Conducts real-time biometric identification in public spaces (with limited law enforcement exceptions)

Organisations deploying these systems face immediate prohibition and significant penalties.

High-risk 

High-risk AI systems face the strictest regulatory requirements. This category includes AI used in: 

  • Critical infrastructure
  • Education
  • Employment 
  • Essential services
  • Law enforcement
  • Migration management
  • Justice administration

High-risk systems must undergo conformity assessments, maintain technical documentation, implement human oversight, ensure transparency, and meet accuracy and AI cybersecurity standards. Biometric identification systems, CV-screening tools, and AI used in credit scoring all fall into this category.

Limited risk

Limited-risk systems must meet transparency obligations. Users must be informed when interfacing with AI systems like chatbots, deepfakes, or emotion recognition tools. 

While less burdensome than high-risk requirements, these transparency rules still require clear disclosure and user awareness.

Minimal risk

Minimal-risk AI includes most everyday applications like spam filters, inventory, management systems, or AI-enabled video games. These systems face no specific obligations under the Act, though general data protection and consumer laws still apply.

Who does the EU AI Act apply to?

The EU AI Act has a broad territorial reach, extending well beyond organisations physically located in the EU. If your business develops, deploys, or benefits from AI systems that affect people in the EU, you likely fall within its scope.

Providers

Providers of AI systems placing products on the EU market are directly regulated regardless of where they’re based. This means that a US-based company offering an AI recruitment tool to EU customers must comply with the Act’s requirements.

Deployers 

Deployers (organisations using AI systems in the EU) are also covered, even if the AI was developed elsewhere. If you’re using a third-party AI tool to screen job applicants in Germany, you’re responsible for ensuring it meets regulatory standards.

Importers and distributors

Importers and distributors bringing AI systems into the EU market share responsibility for compliance, and authorised representatives may need to be appointed by non-EU providers.

GPAIs

The Act also covers general-purpose AI model developers, including large foundation models, imposing transparency and risk management obligations on those creating widely-used AI systems.

Extraterritorial

Importantly, the Act applies extraterritorially when AI systems produce outputs used in the EU, even if the system itself operates elsewhere. A facial recognition system deployed in the UK but processing data of EU citizens could trigger compliance requirements.

This extraterritorial reach mirrors GDPR’s approach, creating compliance obligations for any organisation whose AI impacts EU residents. 

Small-scale research projects and purely personal use remain exempt, but businesses of any size operating commercially in the EU should assume the Act applies unless clearly excluded.

EU AI Act compliance: Guidance for businesses

Achieving compliance with the EU AI Act requires structured planning, clear ownership, and ongoing vigilance. 

This section provides the EU AI Act explained through practical compliance steps that businesses can follow to meet regulatory requirements effectively.

Conduct an AI inventory

Start by mapping every AI system your organisation develops, deploys, or relies on. Include third-party tools, APIs, and embedded AI within larger platforms. Document each system’s purpose, data inputs, decision-making processes, and geographic reach. This inventory forms the foundation of your compliance programme and helps identify which systems fall under the Act’s scope.

Classify your AI systems by risk level

Once you’ve mapped your AI estate, categorise each system according to the EU AI Act’s risk framework. High-risk systems – such as those used in recruitment, credit decisions, or critical infrastructure – require immediate attention. 

Misclassification can lead to significant penalties, so if you’re uncertain, seek legal or technical expertise. Documenting your classification rationale protects your organisation during audits.

Implement governance and accountability structures 

Assign clear ownership for AI compliance within your organisation. Establish cross-functional teams involving legal, IT, security, and deployment, ensuring they align with the Act’s requirements for risk management, transparency, and human oversight. 

Regular reviews and updates keep these policies current as AI technology and regulations evolve.

Prioritise high-risk system requirements

For high-risk AI, compliance demands technical rigour. Conduct conformity assessments, document training data and model performance, implement human oversight mechanisms, and ensure transparency in how decisions are made.

LLM penetration testing provides a thorough and documentable method of proactively uncovering injection vulnerabilities, empowering your organisation to implement LLMs without the risk of potential data leaks or operational disruptions. Regular security assessments help validate that your AI systems meet the Act’s cybersecurity and robustness standards.

Maintain detailed documentation

The EU AI Act requires detailed technical documentation for high-risk systems, including information on training data, model architecture, risk assessment, and performance metrics. 

Keep records of conformity assessments, incident logs, and updates made to systems over time. This documentation proves compliance during regulatory inspections and supports internal accountability. 

Establish incident response and monitoring processes

Deploy continuous monitoring to detect when AI systems produce unexpected or harmful outputs. Create incident response procedures specifically for AI failures, including escalation paths, stakeholder notification, and remediation protocols.

Logging and alerting mechanisms should capture anomalies in real-time, enabling rapid intervention before minor issues become regulatory breaches.

Use third-party expertise where needed

The EU AI Act’s technical and legal complexity can overwhelm internal teams. Consider working with external consultants, legal advisors, or testing providers specialising in AI compliance. 

Security assessments, such as AI red teaming and penetration testing, can validate that your systems are resilient against adversarial attacks and meet regulatory standards. External audits also provide independent verification for stakeholders and regulators.

Plan for ongoing compliance, not one-off fixes

The EU AI Act isn’t a static checklist. As your AI systems evolve and new models are deployed, compliance requirements shift. 

Treat AI governance as a continuous process, building regular reviews, retraining, and updates into your development lifecycle. Staying ahead of regulatory changes and emerging best practices keeps your organisation compliant and competitive. 

Frequently Asked Questions

Longbluediv

Got a question you need answering? Our FAQs should help guide you

Yes, the EU AI Act was formally adopted in May 2024 and entered into force on 1 August 2024. However, different provisions apply at different times. Prohibitions on unacceptable-risk AI systems took effect in February 2025, while high-risk system requirements become fully enforceable from August 2027.

The EU AI Act does not directly apply to the UK post-Brexit. However, UK businesses deploying AI systems that affect EU residents or selling AI products into the EU market must comply with the Act. The UK is developing its own AI regulatory framework, but currently follows a sector-specific approach rather than comprehensive legislation.

The Act primarily regulates AI systems placed on the EU market or whose outputs are used within the EU. If an AI system is deployed outside the EU but processes data or produces decisions affecting individuals physically located in the EU, compliance obligations may apply. The Act's territorial scope mirrors GDPR's extraterritorial reach.

Recent research reveals that the average breach now costs $4.35 million, with global cybercrime expenses projected to surge by 23% annually. By 2027, these costs could reach a staggering $23.84 trillion per year. This alarming trend underscores the critical need for robust cybersecurity measures.

Penetration testing is a vital cyber security solution. As cyber attacks grow more sophisticated and frequent, proactive testing of your defences becomes even more important. Safeguard against potential breaches and avoid devastating financial impacts. Implementing pentesting services is no longer optional – it’s a necessity for many businesses seeking to protect their assets and reputation.

Related Articles