Regulation Analysis · 11 min read

EU AI Act GPAI Rules: What Foundation Model Developers Must Do by August 2025

The general-purpose AI (GPAI) model provisions of the EU AI Act are now in effect. Here's what developers and deployers of foundation models need to know.


The EU AI Act's provisions for general-purpose AI (GPAI) models took effect in August 2025. If you develop, fine-tune, or deploy foundation models or large language models in the EU, here is what you are obligated to do.

What Is a GPAI Model Under the EU AI Act?

A GPAI model is an AI model trained on large amounts of data, designed for general competence, and capable of being used in a wide range of downstream tasks. This includes:

  • Large language models (GPT, Claude, Llama, Gemini, etc.)
  • Multimodal foundation models
  • Code generation models
  • Diffusion models for images/video

If you train or fine-tune such a model and make it available in the EU (including through an API), you are a GPAI model provider subject to these rules.

Obligations for All GPAI Providers

1. Technical Documentation

You must prepare and maintain technical documentation covering:

  • Architecture, training approach, and objectives
  • Computational resources used (training compute in FLOPs)
  • Training data types, sources, and filtering
  • Model evaluation results, including on standardized benchmarks
  • Known limitations and risks
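
The checklist above can be kept as a structured record rather than free-form prose, which makes it easier to maintain and hand to downstream providers. A minimal sketch in Python — the field names and example values are illustrative only; the Act does not prescribe a schema:

```python
# Illustrative structure for GPAI technical documentation.
# Field names and values are this sketch's own, not an official template.
from dataclasses import dataclass, asdict


@dataclass
class GPAITechnicalDocs:
    architecture: str              # model architecture and size
    training_objective: str        # training approach and objectives
    training_compute_flops: float  # total training compute, in FLOPs
    data_sources: list             # training data types and sources
    data_filtering: str            # filtering and curation steps
    benchmark_results: dict        # standardized evaluation results
    known_limitations: list        # known limitations and risks


docs = GPAITechnicalDocs(
    architecture="decoder-only transformer, 70B parameters",
    training_objective="next-token prediction",
    training_compute_flops=6.3e24,
    data_sources=["licensed corpora", "public web crawl"],
    data_filtering="deduplication, PII removal, quality classifiers",
    benchmark_results={"MMLU": 0.79},
    known_limitations=["hallucination", "English-centric training data"],
)
```

Serializing such a record (e.g. via `asdict(docs)`) gives you a machine-readable document you can version alongside the model.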

2. Copyright Compliance Summary

Publish a sufficiently detailed summary of the content used to train the model, so that copyright holders can assess whether their works were included.

3. AI-Generated Content Marking

If your model produces synthetic content (text, images, audio, video), you must implement machine-readable watermarking or similar technology so that AI-generated content can be detected.
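
In practice this means attaching machine-readable provenance information to generated output. A deliberately minimal sketch — real deployments would use an established standard such as C2PA content credentials for media, or statistical watermarks for text, rather than this toy JSON record:

```python
# Minimal illustration of machine-readable AI-provenance marking.
# NOT a full watermarking scheme: a sidecar record like this can be
# stripped from the content, unlike an embedded watermark.
import hashlib
import json


def make_provenance_record(content: bytes, model_id: str) -> str:
    """Build a machine-readable JSON record declaring AI-generated content."""
    record = {
        "ai_generated": True,
        "model": model_id,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(record, sort_keys=True)


def matches(content: bytes, record_json: str) -> bool:
    """Check that a provenance record corresponds to the given content."""
    record = json.loads(record_json)
    return (
        record.get("ai_generated") is True
        and record.get("sha256") == hashlib.sha256(content).hexdigest()
    )
```

The design point the sketch illustrates: detection must work from the record alone, without access to the generating system.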

4. Information to Downstream Providers

If you provide your model to other businesses that integrate it into their products, you must give them enough information to comply with their own obligations under the Act.

Additional Obligations for Systemic-Risk GPAI Models

Models whose cumulative training compute exceeds 10^25 FLOPs (or that the Commission designates as posing systemic risk) face additional obligations:

  • Adversarial testing / red teaming — before and after major updates
  • Incident reporting — report serious incidents and possible corrective measures to the European AI Office without undue delay
  • Cybersecurity measures — protect model weights and training infrastructure
  • Energy reporting — report energy consumption during training and inference

As of 2025, models likely in this category include GPT-4 class models and above. The EU AI Office will publish guidance on classification.
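
To get a rough sense of whether a model approaches the 10^25 FLOP threshold, a common back-of-envelope estimate for dense transformers is ~6 FLOPs per parameter per training token. This heuristic is this sketch's own assumption, not the Act's official measurement method:

```python
# Back-of-envelope check against the EU AI Act's 10^25 FLOP
# systemic-risk threshold, using the common ~6 * N * D approximation
# (6 FLOPs per parameter per training token). A heuristic only.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimate_training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * params * tokens


def is_potentially_systemic(params: float, tokens: float) -> bool:
    """True if the estimate meets or exceeds the Act's presumption threshold."""
    return estimate_training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


# Example: a 70B-parameter model trained on 15T tokens
# lands at roughly 6.3e24 FLOPs, just below the threshold.
flops = estimate_training_flops(70e9, 15e12)
```

Note that a 70B model on 15T tokens sits close to the line; modestly larger runs cross it, which is why frontier-scale models are the ones presumed to carry systemic risk.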

Obligations for GPAI Deployers

If you deploy someone else's GPAI model in a product:

  • Ensure your vendor has provided required documentation
  • Maintain records of which GPAI models your product uses
  • Implement AI-disclosure requirements for users (if your product generates synthetic content)
  • Register as a deployer if your application is high-risk under the Act

Timeline

Date            Milestone
August 2024     EU AI Act enters into force
February 2025   Prohibited AI practices enforceable
August 2025     GPAI model rules enforceable
August 2026     High-risk AI system rules enforceable



Not legal advice. This article is for informational purposes only. Always consult a qualified attorney for compliance decisions.