The AI Act has been taking effect in phases since 2 August 2024, as we explained in an earlier blog post. The obligations to phase out prohibited AI systems and to ensure AI literacy already apply. As of 2 August 2025, the obligations under the AI Act for providers of general-purpose AI models apply as well. In this blog, we outline those obligations.
What are general-purpose AI models?
General-purpose AI models are models capable of performing a broad range of tasks and suitable for a wide variety of applications: for example, AI models that can generate text or summarize documents without having been developed specifically for a particular sector or use case. Examples include the AI models underlying AI systems such as ChatGPT, Claude, and Google Gemini.
What are the obligations for providers of such models?
Providers of general-purpose AI models must, among other things, prepare technical documentation, make information available about the capabilities and limitations of the model, and be transparent about the training data used. Additional obligations apply to AI models posing a “systemic risk”, that is, models whose capabilities can have a major impact, for example on public safety. The core of these additional obligations is to identify, mitigate, and monitor such systemic risks.
An AI model can be integrated into an AI system, which in addition comprises, among other things, the software and interfaces that enable the model to be applied in a specific context. General-purpose AI systems – like other AI systems – are subject to the general obligations under the AI Act, such as transparency requirements and, where applicable, the performance of a conformity assessment.
The European AI Office has published a Code of Practice for general-purpose AI models: the General-Purpose AI Code of Practice. The Code of Practice is designed to help providers comply with the obligations under the AI Act. Among others, OpenAI, Microsoft, and Google have signed the Code of Practice.
What should organizations that procure or use AI systems consider?
As of 2 August 2025, the obligations described above for providers of general-purpose AI models apply. At the same time, organizations are increasingly using AI systems such as ChatGPT or Copilot.
Before procuring or deploying an AI system within the organization, it is advisable to assess which AI system best matches the organization’s objectives and risk profile. Key considerations include any limitations of the AI system that may make it unsuitable or risky for the intended application (such as hallucinations) and the level of transparency regarding the AI system’s functioning.
Once the right AI system has been selected, careful contracting with the AI system provider is essential. Clear agreements should be made on the purpose, capabilities, and intended use of the AI system; the security requirements (such as ISO certification); the availability of information about the AI system; and the service levels, including uptime and response times for resolving issues.
Organizations of all sizes should also regulate the use of AI internally. This can be achieved through the adoption of an AI Code of Conduct: a policy document setting out basic knowledge about how AI works, supplemented with clear guidelines on the (permitted) use of AI within the organization. An AI Code of Conduct thus contributes to both the responsible use of AI and the promotion of AI literacy within the organization – an obligation under the AI Act.
Do you have questions about implementing the AI Act, contracting with an AI system provider, or drafting an AI Code of Conduct? HVG Law is happy to assist. Please feel free to contact us to discuss how we can support your organization.