The Ledger · Saturday, 28 February 2026 · Issue № 15

AI Compliance Hub · newsroom

Regulation Analysis · 9 min read

The EU AI Act GPAI Code of Practice Finally Drops: What It Means for AI Companies

The EU AI Office has published the General-Purpose AI Code of Practice. Here’s what it requires, who it applies to, and what foundation model developers must do now that the August 2025 obligations are in force.


The EU AI Office published the first draft of the General-Purpose AI (GPAI) Code of Practice in November 2024. After four iterative drafts and input from over 1,000 stakeholders, the final code arrived in early 2025. GPAI obligations under the EU AI Act became enforceable on August 2, 2025.

If your company develops, fine-tunes, or deploys foundation models — or if you build products on top of them — this code affects you.


What the GPAI Code of Practice Is

The Code of Practice is a voluntary but practically mandatory compliance instrument. Under Article 56 of the EU AI Act, providers of GPAI models can demonstrate compliance with their obligations by adhering to a code of practice approved by the EU AI Office.

“Voluntary” is a technical term here. Providers who don’t follow the code still have to meet the underlying legal obligations — they just have to demonstrate compliance another way. For most companies, following the code is the path of least resistance.

The code covers two tiers of GPAI providers:

Tier 1: All GPAI providers must follow rules on transparency, copyright compliance, and documentation.

Tier 2: Systemic risk GPAI providers — models trained with more than 10²⁵ FLOPs — face additional requirements including adversarial testing, incident reporting, and cybersecurity measures.
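To get a feel for where the 10²⁵ FLOP line falls, you can use the common back-of-envelope approximation that training a dense transformer costs roughly 6 × parameters × training tokens. This heuristic is not the Act's own measurement methodology, and the model sizes below are illustrative:

```python
# Rough estimate of training compute using the common
# FLOPs ≈ 6 * N_params * N_tokens approximation (a heuristic,
# not the AI Act's own measurement methodology).

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, the Act's presumption threshold


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * n_params * n_tokens


def is_presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimate meets or exceeds the 1e25 FLOP threshold."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD


# Hypothetical 70B-parameter model trained on 15T tokens:
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs")  # 6.30e+24 — just under the threshold
print(is_presumed_systemic_risk(70e9, 15e12))  # False
```

Under this heuristic, today's largest frontier training runs land above the threshold while most smaller open-weight models do not, which is why the tier split tracks roughly with "GPT-4 class and above."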


Who This Applies To

The GPAI provisions apply to providers who place general-purpose AI models on the EU market, regardless of where the provider is based. If you’re a US company with models used in Europe, you’re in scope.

In scope:

  • Companies releasing foundation models (GPT-class, Claude-class, Llama-class, image generation models, etc.)
  • Companies that release GPAI models via API accessible in the EU
  • Companies fine-tuning and re-releasing GPAI models with substantial changes

Out of scope:

  • Pure deployers who don’t modify the model (though you may inherit some obligations from providers)
  • Internal research models not placed on the market
  • Open-source models meeting specific transparency criteria (these face reduced obligations rather than a full exemption)

What Tier 1 Requires (All GPAI Providers)

Technical Documentation

Providers must prepare and maintain technical documentation before placing a model on the market. The Code of Practice specifies this must include:

  • Model architecture description
  • Training methodology and compute used
  • Training data description (sources, collection methods, filtering)
  • Evaluation results (benchmarks, capabilities, limitations)
  • Known hazards and mitigation measures

This documentation must be made available to downstream providers who integrate your model and to the EU AI Office on request.
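One practical way to stay on top of this is to track the required documentation items as a machine-checkable checklist. The section names below mirror the bullets above, but the schema itself is our own sketch, not one prescribed by the Code:

```python
# A simple completeness check over the documentation sections the
# Code requires. The section names mirror the article's bullets;
# the schema is illustrative, not an official template.

REQUIRED_SECTIONS = [
    "model_architecture",
    "training_methodology_and_compute",
    "training_data_description",
    "evaluation_results",
    "known_hazards_and_mitigations",
]


def missing_sections(documentation: dict) -> list[str]:
    """Return required sections that are absent or empty."""
    return [s for s in REQUIRED_SECTIONS if not documentation.get(s)]


# A half-finished documentation package for a hypothetical model:
doc = {
    "model_architecture": "Decoder-only transformer, 34B parameters",
    "evaluation_results": "MMLU 78.2, internal red-team summary v3",
}
print(missing_sections(doc))
# ['training_methodology_and_compute', 'training_data_description',
#  'known_hazards_and_mitigations']
```

Running a check like this in CI keeps the documentation package from silently drifting out of date as the model changes.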

Copyright Policy

GPAI providers must implement a policy for copyright compliance, including:

  • A “crawler” or bot policy that respects rights-holder opt-outs where technically feasible
  • Documentation of the copyright compliance policy
  • Availability of the policy to rights holders (typically via published robots.txt or equivalent)
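For the robots.txt side of this, Python's standard library already parses the format. A minimal sketch of a training-data crawler honoring opt-outs might look like the following; the user-agent string "ExampleTrainingBot" is hypothetical, and a real pipeline would also need to handle opt-out mechanisms beyond robots.txt:

```python
# Minimal sketch: check a site's robots.txt before fetching training
# data. "ExampleTrainingBot" is a hypothetical crawler name.
from urllib import robotparser


def may_crawl(robots_txt: str, user_agent: str, url: str) -> bool:
    """Check a fetched robots.txt body against a URL for this agent."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, url)


# A site opting its articles out of AI training crawls while
# remaining open to other bots:
robots = """\
User-agent: ExampleTrainingBot
Disallow: /articles/

User-agent: *
Allow: /
"""
print(may_crawl(robots, "ExampleTrainingBot", "https://example.com/articles/1"))  # False
print(may_crawl(robots, "SearchIndexer", "https://example.com/articles/1"))       # True
```

The design point is that the opt-out must be enforced at collection time; filtering after the fact does not demonstrate that the crawler respected the rights holder's signal.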

The EU AI Act requires providers to make publicly available a “sufficiently detailed summary” of training data used. The Code of Practice operationalizes this requirement.

Transparency to Downstream Operators

When other businesses integrate your GPAI model, you must provide them with adequate information about:

  • What the model can and cannot do
  • What safeguards are built in
  • What additional safeguards downstream operators should add for their use case

What Tier 2 Adds (Systemic Risk Models)

Models trained above the 10²⁵ FLOP threshold — currently GPT-4 class and above — face additional requirements.

Adversarial Testing

Providers must conduct adversarial testing (“red-teaming”) before release and on an ongoing basis. The Code specifies:

  • Testing must cover cybersecurity risks, biological and chemical risks, and societal harms
  • Providers must use qualified internal or external testers
  • Results must be documented and shared with the EU AI Office

Incident Reporting

Systemic risk providers must:

  • Track and classify “serious incidents” caused by their model
  • Report to the EU AI Office within 72 hours of discovering a serious incident
  • Cooperate with the AI Office’s investigations

Cybersecurity

Providers must implement cybersecurity measures commensurate with the risks of their model, documented and auditable.


Timeline

  • Aug 2, 2025: GPAI obligations legally enforceable
  • Sept 2025: Code of Practice finalized by the AI Office
  • Oct 2025: Providers can formally sign the code and begin compliance
  • Ongoing: Annual reviews of compliance

What to Do Now

If you provide a GPAI model:

  1. Assess whether you’re Tier 1 or Tier 2
  2. Begin assembling technical documentation now — it takes months to compile properly
  3. Review your training data acquisition process for copyright compliance gaps
  4. If Tier 2, conduct an adversarial testing program and set up an incident tracking system
  5. Sign the GPAI Code of Practice through the EU AI Office registry

If you build on GPAI models:

  1. Ask your model provider for the documentation they’re required to provide
  2. Review it for risks relevant to your specific use case
  3. Add appropriate safeguards on your application layer for your specific use

The GPAI Code of Practice is the EU’s way of operationalizing broad legal obligations into concrete actions. Providers that engage with it early will find the path to compliance much smoother than those who wait for enforcement to define the standard.

EU AI Act · GPAI · Foundation Models · Code of Practice
AI Compliance Hub editors
The editorial desk covers AI and cyber regulation across the US, EU, and UK. Tips? editors@aicompliancehub.com
Not legal advice

This article is for informational purposes only and does not constitute legal advice. Always consult qualified counsel before making compliance decisions.
