- EU AI Act enforcement begins: Companies must comply with the law's first restrictions, including bans on applications deemed to pose unacceptable risk, such as real-time facial recognition and social scoring.
- Strict penalties introduced: Violators face fines of up to 35 million euros ($35.8 million) or 7% of global annual revenue, whichever is higher, exceeding GDPR's maximum penalties.
- Mixed industry reactions: Some worry the law may hinder innovation, while others see it as a step toward ethical and trustworthy AI development.
The European Union has officially begun enforcing its groundbreaking AI regulation, the EU AI Act, marking a significant shift in global oversight of artificial intelligence. The law, which entered into force in August 2024, introduces stringent restrictions on certain AI applications and imposes hefty fines for non-compliance. As of Sunday, February 2, 2025, companies operating within the EU must ensure adherence to the new rules or risk penalties of up to 35 million euros ($35.8 million) or 7% of their global annual revenue, whichever is higher.
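For a concrete sense of how that "whichever is higher" cap scales, here is a minimal Python sketch; the function name and the revenue figures in the examples are illustrative assumptions, not drawn from the Act's text.

```python
def max_fine_eur(global_annual_revenue_eur: int) -> int:
    """Upper bound on a fine for a prohibited-practice violation:
    the greater of a fixed 35 million euros or 7% of global annual revenue."""
    fixed_cap = 35_000_000                               # 35 million euros
    revenue_cap = global_annual_revenue_eur * 7 // 100   # 7% of revenue
    return max(fixed_cap, revenue_cap)

# Hypothetical revenue figures, chosen only to show where the cap flips:
print(max_fine_eur(100_000_000))     # 35000000  -> fixed cap dominates
print(max_fine_eur(2_000_000_000))   # 140000000 -> 7% of revenue dominates
```

The crossover sits at 500 million euros of annual revenue: below it the fixed 35 million euro ceiling applies, above it the 7% share takes over, which is why the fine weighs far more heavily on large companies.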
The AI Act classifies AI systems based on risk levels and outright bans those deemed to pose an "unacceptable risk" to citizens. Prohibited applications include real-time facial recognition in public spaces, social scoring systems, and AI tools that manipulate human behavior or categorize individuals by sensitive attributes such as race or sexual orientation. In addition to these bans, the law requires companies to provide AI literacy training for staff, so that those working with the technology understand its risks and limitations.
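For readers who prefer a structural view, here is a minimal Python sketch of that risk tiering; the article itself names only the unacceptable tier, so the other tier names and their one-line descriptions are paraphrased from the Act's published risk taxonomy rather than from this piece.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's risk-based classification, heavily simplified."""
    UNACCEPTABLE = "banned outright"
    HIGH = "allowed, subject to strict conformity and oversight duties"
    LIMITED = "allowed, subject to transparency obligations"
    MINIMAL = "allowed, no new obligations"

# The practices the article lists as banned all fall into the top tier:
PROHIBITED = (
    "real-time facial recognition in public spaces",
    "social scoring",
    "manipulation of human behavior",
    "categorization by sensitive attributes",
)
assert all(RiskTier.UNACCEPTABLE for _ in PROHIBITED)
```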
While enforcement has now begun, the AI Act is not yet in full effect. The rollout will occur in phases, with additional provisions, such as transparency and compliance obligations for general-purpose AI models, coming into force over the next few years. To oversee implementation, the EU has established an AI Office, which recently released a second draft of its code of practice for general-purpose AI models. The draft exempts certain open-source AI developers while requiring providers of models deemed to pose systemic risk to undergo detailed risk assessments.
The legislation has sparked mixed reactions from industry leaders. While some view it as a necessary step to ensure ethical AI development, others express concerns that strict regulations could stifle innovation. Critics argue that Europe’s regulatory focus may put it at a competitive disadvantage compared to the U.S. and China, where AI development continues with fewer restrictions. However, proponents believe that the AI Act’s emphasis on bias detection, risk assessments, and human oversight will position Europe as a leader in responsible AI deployment.
As enforcement progresses, companies will need to navigate evolving compliance requirements and adapt to the EU's vision for safe and transparent AI use. With the law setting a potential global precedent, other regions may look to the EU as a model for regulating AI while balancing innovation and consumer protection. The coming years will show whether this approach fosters a more trustworthy AI ecosystem or erects barriers that slow technological advancement.