European Tech Regulation

The Artificial Intelligence Act

The AI Act is the first comprehensive regulatory framework for AI systems placed on or used within the EU market. It applies to all actors in the AI value chain, regardless of their location, whenever their AI systems affect individuals in the EU or the output of those systems is used in the EU. The AI Act sets rules promoting ethical AI use and strengthening consumer protection, while also fostering innovation and facilitating market access.

Affected businesses

The AI Act applies to AI systems using a risk-based approach, classifying them into prohibited, high-risk, limited-risk, and minimal-risk categories, with different rules and obligations depending on the level of risk posed.

Almost every organisation developing, deploying or using an AI system must comply with the AI Act, regardless of whether that organisation is established in the European Union.

The AI Act has an extraterritorial effect, which means that it also applies to providers or deployers of AI systems established or located outside the EU where the output of the system is used in the EU.

Key impacts

The AI Act introduces a multi-tiered and gradual scheme of requirements and obligations depending on the level of risk posed to health, safety and fundamental rights.

Whereas AI systems falling under the ‘unacceptable risk’ category are prohibited outright, systems in the other categories must, depending on the circumstances, comply with different obligations, such as:

  • Providing adequate transparency and information to users.
  • Ensuring human oversight to mitigate risks and ensure ethical use.
  • Providing comprehensive technical and security measures.
  • Undergoing conformity assessments.

Enforcement

Depending on the nature of the infringement, fines can reach up to 35,000,000 euros or up to 7% of total worldwide annual turnover for the preceding financial year, whichever is higher.

Key timings

The AI Act entered into force on 1 August 2024, and will be fully applicable 2 years later on 2 August 2026, with some exceptions:

  • prohibitions and AI literacy obligations entered into application from 2 February 2025
  • the governance rules and the obligations for general-purpose AI models become applicable on 2 August 2025
  • the rules for high-risk AI systems embedded in regulated products have an extended transition period until 2 August 2027
