TL;DR: The European Commission has published the final version of the Code of Practice for general-purpose AI models to help providers comply with the AI Act. It focuses on transparency, copyright, and safety, with application phased in from 2026.
After a third draft was published last March, the European Commission yesterday presented the final version of the Code of Practice for general-purpose AI (GPAI) models. This voluntary framework aims to help providers comply with the AI Act obligations relating to these models, which take effect on August 2nd.
Developed by 13 independent experts, with contributions from over 1,000 stakeholders (model providers, SMEs, academics, AI safety experts, rights holders, and civil society organizations), the code is structured around three chapters.
The first two chapters, covering transparency and copyright, apply to all providers of general-purpose AI models. The third, dedicated to safety and security, targets a narrower subset of so-called advanced models that may present systemic risks. These include large-scale models such as GPT-4 (OpenAI), Gemini (Google DeepMind), and Claude (Anthropic), whose general capabilities, versatility, and scalability pose unprecedented governance challenges.
The systemic risks identified include, for example, the generation of highly persuasive or misleading content, the circumvention of cybersecurity safeguards, the facilitation of malicious activities, including in the chemical and biological domains, and the loss of human control over model outputs. To address them, the code recommends a series of risk management practices, ranging from technical robustness to enhanced human oversight.
The transparency chapter offers a simplified documentation form that lets providers record the required information in a single place. The copyright chapter gives them practical ways to implement a policy compliant with EU copyright law.
Official guidelines, due before the August 2nd deadline, are expected to clarify the scope of the text and set out how it will apply in practice: which models qualify as GPAI, who counts as their provider, and how systemic risk is assessed.
While the rules still enter into force early next month, despite calls for a moratorium from about fifty members of the EU AI Champions Initiative, their effective application will not begin until August 2026 for new models and August 2027 for existing ones. This phased rollout, overseen by the Commission's AI Office, is meant to give companies time to adapt while upholding the credibility of the European approach.
Although non-binding, the code functions as a pre-compliance tool: providers who sign it benefit from reduced administrative burden and greater legal certainty than those who demonstrate compliance by other means. Its voluntary nature nonetheless raises questions about how widely GPAI providers, particularly non-European ones, will actually adhere to a framework that, however cooperative, adds complexity to the chain of responsibility.
Henna Virkkunen, Executive Vice-President for Technological Sovereignty, Security, and Democracy, comments:
"Today's publication of the final version of the code of good practices for general-purpose AI marks an important step in making the most advanced AI models available in Europe, not only innovative but also safe and transparent. Co-designed by AI stakeholders, the code is aligned with their needs. Therefore, I invite all providers of general-purpose AI models to adhere to the code. This will provide them with a clear and collaborative path to comply with EU AI legislation".