With key provisions of the EU AI Act coming into force on 2 August 2025, a new era of artificial intelligence regulation begins across the European Union. This legislation marks the world’s first comprehensive, binding legal framework for the use of AI, particularly affecting general-purpose AI models (GPAI) such as OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and Meta’s LLaMA. These versatile models are increasingly embedded in products, services and internal processes across industries.
Transparency Requirements for Large-Scale AI Models
A central pillar of the legislation is its transparency obligations for providers of GPAI models. From August onwards, companies must disclose how their models were trained, which datasets were used, and whether copyrighted content was included – such as text corpora, code repositories or journalistic content. A technical documentation outlining the model architecture and training process is also mandatory. General descriptions won’t suffice: the EU demands clear, verifiable documentation that allows authorities to assess each model in terms of safety, transparency and risk potential.
In addition, developers must demonstrate they have implemented strategies to respect intellectual property rights. This includes not only written content but also visual and audio material – even though standalone tools like DALL·E are no longer separately marketed, their multimodal capabilities continue within products like GPT-4o or Gemini 1.5 Pro.
Safety Measures for High-Risk Systems
For GPAI models that pose a so-called systemic risk – due to their scale, integration potential or societal influence – additional obligations apply. These include mandatory risk assessments, adversarial testing (which simulates possible misuse scenarios), and immediate reporting requirements for security incidents. The aim is to prevent the uncontrolled spread of unsafe behaviours, vulnerabilities or harmful outputs.
Institutional Framework: The New AI Office
Also from August 2025, the EU’s central AI Office becomes operational. It will coordinate cooperation among national supervisory authorities, issue technical guidance, and ensure consistent interpretation of the AI Act across member states. All EU countries must designate their respective competent bodies and publish clear points of contact for AI-related queries.
At the same time, confidentiality rules will be formalised. All information collected during audits, assessments or compliance procedures will be subject to strict data protection regulations, applicable to both national authorities and the European Commission.
Sanctions for Non-Compliance
From 2 August 2025, all member states must have implemented national enforcement mechanisms for penalties. Violations of the transparency, documentation or safety requirements may trigger severe fines – up to €35 million or 7% of the company’s global annual turnover, whichever is higher. These sanctions are designed to ensure that even large tech firms take compliance seriously and cannot simply absorb fines as part of operational risk.
Who Is Affected – and Who Is Not (Yet)?
The obligations primarily apply to developers and distributors of GPAI models, as well as to businesses that integrate such models into their own products or platforms. End users – such as consumers or small businesses that only use third-party AI tools – are not directly affected for now. However, they are encouraged to keep a close eye on how providers implement the new legal standards. Platform operators who incorporate third-party models will also be expected to verify their origin and compliance.
The more stringent requirements for high-risk AI systems – such as those used in biometric identification, law enforcement, critical infrastructure or employment – will not apply until August 2026, with some deadlines extending into 2027. This gives developers more time to adapt.
What Else Is Still to Come?
Not all aspects of the AI Act come into force this year. For example, labelling requirements for AI-generated content – including text, images or synthetic speech – won’t be mandatory until August 2026. Likewise, the obligation to register powerful GPAI models in a central EU database is not fully enforceable until 2026. Certain technical standards – for transparency indicators, safety testing protocols or model classification – are still being finalised in coordination with international bodies such as the OECD and G7.
Conclusion: The AI Act Reshapes the European AI Landscape
For many developers, the era of voluntary self-regulation ends in August 2025. The AI Act establishes a new legal foundation for AI in Europe, with far-reaching implications for product development, data handling and corporate accountability. While many of the finer details remain in flux, the direction is now clear: AI systems must be transparent, safe, legally sound and subject to external scrutiny. Otherwise, businesses risk not only significant fines, but also a loss of trust in an increasingly sensitive and regulated market.
The next stages of the Act are already in view. From 2026, rules for high-risk systems will take effect, covering fairness, robustness and safety in key sectors. But the groundwork begins now. Companies that still hesitate should use the remaining time to bring their structures, documentation and internal processes in line with the new legal standards. The transition period has begun – but it won’t last forever.

