August 1st marks a significant milestone: the entry into force of the European Artificial Intelligence Act (AI Act), the world’s first comprehensive regulation of artificial intelligence (AI). This landmark legislation ensures that AI developed and used in the European Union (EU) is trustworthy, with safeguards for fundamental rights. The AI Act also aims to establish a unified internal market for AI within the EU, encouraging technological adoption, innovation, and investment.
The AI Act introduces a forward-looking definition of AI, grounded in a product-safety, risk-based approach that sorts AI systems into four categories:
- Minimal Risk: This category includes AI systems such as recommender engines and spam filters. These systems are deemed to pose minimal risk to citizens’ rights and safety and therefore face no obligations under the AI Act. Companies can choose to voluntarily adopt additional codes of conduct to enhance their transparency and trustworthiness.
- Specific Transparency Risk: AI systems like chatbots must clearly disclose to users that they are interacting with artificial intelligence. AI-generated content, including deep fakes, needs to be appropriately labeled. Users should be informed when systems employ biometric categorization or emotion recognition. Additionally, providers must ensure that synthetic audio, video, text, and image content is marked in a machine-readable format to indicate it is artificially generated or manipulated.
- High Risk: High-risk AI systems must meet stringent requirements, including risk mitigation, high-quality data sets, activity logging, detailed documentation, clear user information, human oversight, and robust cybersecurity measures. Regulatory sandboxes will support responsible innovation and compliant AI system development. Examples of high-risk AI systems include those used for recruitment, loan eligibility assessments, or operating autonomous robots.
- Unacceptable Risk: AI systems that pose a clear threat to fundamental rights will be banned. This includes AI systems that manipulate human behavior, such as toys using voice assistance to encourage dangerous actions in minors, systems that allow ‘social scoring’ by governments or companies, and specific predictive policing applications. Some biometric system uses, like emotion recognition in workplaces, are also prohibited.
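To make the transparency obligation concrete, the sketch below shows one way a provider might attach a machine-readable disclosure label to generated content. The schema and field names are invented for this illustration; the AI Act requires that synthetic content be detectably marked but does not prescribe a particular format (in practice, watermarking and provenance-metadata standards are common approaches):

```python
import json

def label_synthetic_content(payload: bytes, generator: str) -> dict:
    """Wrap generated content with a machine-readable disclosure label.

    The field names here are purely illustrative; they are not mandated
    by the AI Act or any standard.
    """
    return {
        "content": payload.decode("utf-8"),
        "disclosure": {
            "ai_generated": True,    # explicit flag for automated checks
            "generator": generator,  # which system produced the content
        },
    }

record = label_synthetic_content(b"A sunny day in Brussels.", generator="demo-model")
print(json.dumps(record["disclosure"]))
```

A downstream service could then inspect the `disclosure` block before displaying the content, satisfying the requirement that users be informed they are viewing AI-generated material.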
The AI Act also introduces regulations for general-purpose AI models, which are versatile AI systems designed to perform various tasks, such as generating human-like text. The legislation ensures transparency along the value chain and addresses systemic risks of these highly capable models.
Application and Enforcement of the AI Rules
EU Member States have until August 2, 2025, to designate national authorities to oversee AI system regulations and conduct market surveillance. The Commission’s AI Office will serve as the primary implementation body at the EU level, enforcing rules for general-purpose AI models.
Three advisory bodies will support the implementation of the rules:
- European Artificial Intelligence Board: Ensures consistent application of the AI Act across Member States and facilitates cooperation between the Commission and Member States.
- Scientific Panel of Independent Experts: Provides technical advice, alerts about risks associated with general-purpose AI models, and offers input on enforcement.
- Advisory Forum: Composed of diverse stakeholders, provides guidance to the AI Office.
Companies that fail to comply with the rules will face penalties: fines of up to 7% of global annual revenue for using banned AI applications, up to 3% for breaches of other obligations, and up to 1.5% for supplying incorrect information.
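The fine tiers above translate directly into a worst-case calculation. The revenue figure below is invented purely for illustration:

```python
# Maximum fine tiers from the AI Act, as fractions of global annual revenue.
FINE_RATES = {
    "banned_ai_application": 0.07,    # up to 7%
    "other_obligation_breach": 0.03,  # up to 3%
    "incorrect_information": 0.015,   # up to 1.5%
}

def max_fine(global_annual_revenue: float, violation: str) -> float:
    """Upper bound on the fine for a given violation category."""
    return global_annual_revenue * FINE_RATES[violation]

# Hypothetical company with EUR 2 billion in global annual revenue.
revenue = 2_000_000_000
print(f"{max_fine(revenue, 'banned_ai_application'):,.0f}")
```

For such a company, deploying a prohibited AI system could cost up to EUR 140 million, an order of magnitude more than the cap for supplying incorrect information.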
Next Steps
Most of the AI Act’s rules will start applying on August 2, 2026. However, prohibitions on AI systems deemed to present an unacceptable risk will already take effect six months after entry into force, and the rules for general-purpose AI models will apply after 12 months.
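Counting forward from the Act’s entry into force on August 1, 2024, the staggered deadlines mentioned in this article fall on the following dates. This is a sketch restating the text’s intervals as concrete dates; the milestone names are informal labels, not terms from the regulation:

```python
from datetime import date

# Key dates in the AI Act's phased application, derived from its entry
# into force on 1 August 2024 and the intervals described in the text.
MILESTONES = {
    "entry_into_force": date(2024, 8, 1),
    "prohibitions_apply": date(2025, 2, 2),               # after six months
    "gpai_rules_apply": date(2025, 8, 2),                 # after 12 months
    "national_authorities_designated": date(2025, 8, 2),  # Member State deadline
    "most_rules_apply": date(2026, 8, 2),                 # main application date
}

for name, when in sorted(MILESTONES.items(), key=lambda kv: kv[1]):
    print(f"{when.isoformat()}  {name}")
```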
To ease the transition, the Commission has launched the AI Pact, which invites AI developers to voluntarily adopt key obligations of the AI Act ahead of the legal deadlines. The Commission is also developing guidelines on implementing the AI Act and facilitating co-regulatory instruments such as standards and codes of practice. A call for participation in drafting the first general-purpose AI Code of Practice, together with a multi-stakeholder consultation, gives all stakeholders the opportunity to contribute.