EU AI Act for Businesses

In 2024, the European Union adopted Regulation (EU) 2024/1689, commonly called the AI Act, the first comprehensive legal framework governing artificial intelligence worldwide. The European Commission's stated aim is to foster trustworthy AI while protecting health, safety, and fundamental rights from harmful systems.
Like the GDPR, the EU AI Act applies globally to organizations developing or deploying AI systems that affect individuals within the European Union.
A Legal Timeline
The road to the AI Act spanned years of extensive negotiation and iteration.
White Paper Published
In February 2020, the European Commission published its White Paper on Artificial Intelligence, providing the first formal AI policy signal.
Legislative Proposal
In April 2021, the Commission formally proposed the AI Act, setting the stage for global AI regulation.
Trilogue Agreement
In December 2023, the EU Council and Parliament reached consensus after intensive negotiation sessions.
Parliament Adoption
In March 2024, the European Parliament formally adopted the final legislative text.
Official Publication
On 12 July 2024, the Act was published in the Official Journal of the EU.
Entry into Force
The legislation officially entered into force on 1 August 2024.
Prohibitions Enforceable
Prohibited AI practices and AI literacy obligations became enforceable on 2 February 2025.
GPAI Obligations Enforceable
From 2 August 2025, transparency and systemic-risk requirements applied to foundation model providers.
General Application
Broad enforcement of high-risk AI system obligations takes effect on 2 August 2026.
Understanding Risk-Based Regulation
The AI Act operates on a four-tier, risk-based system: regulatory strictness scales with an application's potential for harm.
Unacceptable Risk: The legislation bans specific practices outright. Prohibited practices include subliminal manipulation that causes harm, real-time remote biometric identification in public spaces (outside narrow law enforcement exceptions), and social scoring systems. These prohibitions took effect in February 2025.
High Risk: This category heavily regulates AI deployments in hiring, educational administration, credit scoring, law enforcement, and critical infrastructure. If a system screens loan applications or job candidates, it requires extensive compliance work before the August 2026 deadline.
Limited Risk: Systems interacting directly with individuals face distinct transparency obligations. Systems such as chatbots must disclose to users that they are conversing with an AI.
Minimal Risk: Most standard AI-assisted productivity software resides here, facing minimal specific compliance requirements under the Act.
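The tier logic above can be sketched as a simple classification helper. The use-case labels and keyword sets here are illustrative assumptions only; the Act's actual Annex III categories are far more detailed and legally precise.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Hypothetical shorthand labels, not the Act's legal definitions.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_USES = {"hiring", "credit_scoring", "education",
                  "law_enforcement", "critical_infrastructure"}
LIMITED_RISK_USES = {"chatbot", "content_generation"}

def classify(use_case: str) -> RiskTier:
    """Map a use-case label to an approximate AI Act risk tier."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL  # default: most productivity software
```

For example, `classify("credit_scoring")` returns the high-risk tier, flagging the compliance work described below, while an unlisted label falls through to minimal risk.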
Managing High-Risk Operations
Organizations deploying high-risk systems face extensive operational requirements.
Deploying teams must maintain comprehensive technical documentation and enforce structured data governance practices to detect and mitigate model bias. The Act restricts autonomous action, requiring clear human oversight checkpoints before a system finalizes irreversible decisions, a key pillar of internal AI governance. Entities must register these systems in an EU database prior to deployment.
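A human oversight checkpoint of the kind described above might look like the following minimal sketch. The `Decision` structure and `finalize` gate are hypothetical, not a prescribed implementation of the Act's oversight requirements.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str        # e.g. "reject_loan"
    model_score: float
    irreversible: bool  # does executing this decision have irreversible effects?

def finalize(decision: Decision, human_approved: bool) -> str:
    """Gate irreversible automated outcomes behind explicit human sign-off."""
    if decision.irreversible and not human_approved:
        return "queued_for_human_review"
    return "executed"
```

The design point is that the model never executes an irreversible outcome directly; it only proposes one, and the pipeline holds it until a reviewer approves.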
Penalties for non-compliance match the severity of the regulation. Deploying prohibited systems triggers fines of up to €35 million or 7% of annual worldwide turnover, whichever is higher. Most other violations are capped at €15 million or 3% of annual worldwide turnover.
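The "whichever is higher" rule means exposure scales with company size, which a quick calculation makes concrete. The function name and example revenue figure are illustrative.

```python
def max_fine(fixed_cap_eur: float, revenue_share: float,
             annual_worldwide_revenue_eur: float) -> float:
    """Fine ceiling: the higher of a fixed cap and a share of turnover."""
    return max(fixed_cap_eur, revenue_share * annual_worldwide_revenue_eur)

# Prohibited-practice tier for a firm with EUR 1B worldwide turnover:
# 7% of 1B (70M) exceeds the 35M floor, so the revenue share governs.
exposure = max_fine(35_000_000, 0.07, 1_000_000_000)
```

For smaller firms the fixed cap dominates: at €100 million turnover, 7% is only €7 million, so the ceiling stays at €35 million.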
General Purpose AI Considerations
Providers of large foundation models, termed General Purpose AI (GPAI) under the Act, navigate strict rules on training data transparency, copyright compliance, and systemic risk assessments. Deployers building tools on top of these models encounter secondary obligations of their own.
For a deployer, integrating a GPAI-based tool shifts obligations depending on the application. Using a foundation model in a high-risk use case imports the stringent high-risk requirements for the deploying organization, irrespective of the underlying model's own compliance.