On 17 December 2025, the group of experts charged with developing guidance on transparency obligations under the EU Artificial Intelligence (AI) Act published the first draft of the Code of Practice on transparency of AI-generated content.

The draft Code outlines expansive measures for how companies can comply with Article 50 of the EU AI Act. It targets two groups:

  • Providers of generative AI systems (who must mark all AI-generated content)
  • Deployers (who must label deepfakes and public interest text)

While Article 50 will not take effect until August 2026, the draft Code is relevant to providers and deployers planning their 2026 roadmaps.

For providers: A ‘multilayered’ approach to watermarking

The draft Code takes the position that no single marking technique is currently sufficient. Consequently, the Code proposes a multilayered approach.

The draft Code recommends that providers of generative AI systems and general-purpose AI (GPAI) models implement:

  • Metadata embedding: Where possible, provenance information can be added to the metadata of content (e.g. via a digital signature).
  • Interwoven watermarking: In addition, imperceptible watermarks should be embedded directly into the content (e.g. via pixel-level modification) that can withstand typical processing steps, such as compression or cropping.
  • Fingerprinting/logging: Where necessary – given the nature of the service and the risks arising from limitations in metadata embedding and watermarking techniques – providers should establish logging facilities or fingerprinting to verify outputs.
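
To make the "interwoven watermarking" idea concrete, the toy sketch below hides a short provenance tag in the least significant bits of pixel values and then detects it. This is purely illustrative and is not drawn from the draft Code: the tag, functions and pixel representation are hypothetical, and a real scheme would need to survive compression, cropping and other transformations, which this naive approach does not.

```python
# Conceptual sketch only: hide a short provenance tag in the least
# significant bits (LSBs) of pixel values. Production watermarking
# schemes are far more robust; this just illustrates the principle.

WATERMARK = "AI"  # hypothetical 2-byte provenance tag


def to_bits(text: str) -> list[int]:
    """Expand a string into a flat list of bits (MSB first per byte)."""
    return [(byte >> i) & 1 for byte in text.encode() for i in range(7, -1, -1)]


def embed(pixels: list[int], tag: str = WATERMARK) -> list[int]:
    """Overwrite the LSB of the first len(tag)*8 pixels with the tag bits."""
    bits = to_bits(tag)
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out


def detect(pixels: list[int], tag: str = WATERMARK) -> bool:
    """Check whether the expected tag bits appear in the leading LSBs."""
    bits = to_bits(tag)
    return all((pixels[i] & 1) == bit for i, bit in enumerate(bits))


# Usage: mark a synthetic "image" (here, just grayscale values) and verify it.
plain = list(range(100, 150))  # 50 fake pixel values
marked = embed(plain)
assert detect(marked) and not detect(plain)
```

The same pairing of an embedding step with a matching detector mirrors, at a very small scale, the Code's expectation that providers both mark content and offer tools to verify those marks.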

In terms of technical solutions, the draft Code outlines various considerations relating to effectiveness, reliability, robustness and interoperability. The draft Code does not, however, endorse a specific standard (e.g. C2PA).

The Code also suggests that providers implement “detectors” for use by users and third parties (e.g. via an API or a user interface). For providers of GPAI models that can be integrated into downstream services, the draft Code suggests that such detectors should not be limited to detecting embedded watermarks, but should also identify unmarked synthetic content.

A few additional points of interest:

  • Updating terms: The draft Code recommends that terms of service and acceptable use policies explicitly prohibit the alteration or removal of watermarks.
  • Open-weight models: Despite certain exemptions for open-weight models in the AI Act, the draft Code suggests that such models should implement structural marking techniques encoded in the weights during training to facilitate downstream compliance.
  • Multimodal models: The draft Code suggests that for multimodal outputs, providers should enable detection of markings even when only one modality is altered.

For deployers: Deepfake disclosures

For those deploying AI systems – specifically regarding deepfakes and text of public interest – the Code suggests a standardised visual language for transparency:

  • What is in scope: The Code proposes to establish a taxonomy for determining what content is “deepfake” content, indicating it may include “fully AI-generated” content (autonomously generated) and “AI-assisted” content (human-authored but AI-modified). The latter is defined non-exhaustively to include actions like “face/voice replacement or modification” and “seemingly small AI-alterations,” such as “colour adjustments that change contextual meaning (e.g. skin tone)”.
  • An EU AI icon: The draft proposes a common EU-wide icon to visibly mark synthetic content. Until a final design is approved, the Code suggests an interim icon consisting of a two-letter acronym (e.g. “AI”). It is not clear how much traction this idea is likely to get.
  • Placement rules: The draft Code outlines different labelling placement rules for different types of content. In general, a labelling icon must be clear and distinguishable at the “first exposure”. For real-time video, it must be displayed persistently “where feasible”; for audio, there are requirements for audible disclaimers. The Code offers flexibility for artistic or satirical works, allowing for “non-intrusive” placement that does not hamper the enjoyment of the work.

Paperwork

The draft Code imposes a heavy set of requirements around governance and documented compliance – moving the expectation beyond mere technical solutions. Providers must maintain a comprehensive “compliance framework” describing their measures and testing results, while deployers must keep internal documentation of their labelling practices and, when relying on the editorial exemption, retain specific logs identifying the human reviewer and date of approval.

What’s next?

This document is a “first draft” but signals a clear and ambitious direction of travel. Written feedback must be submitted by 23 January 2026.

This Code will eventually serve as a key mechanism for demonstrating compliance with the EU AI Act’s transparency obligations. Companies operating in the EU should review these technical specifications to assess the readiness of their current watermarking and labelling infrastructure and consider making comments if they feel strongly about an aspect of the draft Code.

This content is provided for general informational purposes only, and your access or use of the content does not create an attorney-client relationship between you or your organization and Cooley LLP, Cooley (UK) LLP, or any other affiliated practice or entity (collectively referred to as "Cooley"). By accessing this content, you agree that the information provided does not constitute legal or other professional advice. This content is not a substitute for obtaining legal advice from a qualified attorney licensed in your jurisdiction, and you should not act or refrain from acting based on this content. This content may be changed without notice. It is not guaranteed to be complete, correct or up to date, and it may not reflect the most current legal developments. Prior results do not guarantee a similar outcome. Do not send any confidential information to Cooley, as we do not have any duty to keep any information you provide to us confidential. When advising companies, our attorney-client relationship is with the company, not with any individual. This content may have been generated with the assistance of artificial intelligence (AI) in accordance with our AI Principles, may be considered Attorney Advertising and is subject to our legal notices.