EU's AI Act Enters Into Force, Impacting U.S. Tech Giants
The European Union's AI Act, a landmark regulation governing the development and use of artificial intelligence, officially comes into effect today. The legislation aims to curb the potential harms of AI and introduces significant changes for major U.S. technology companies.
Overview of the AI Act
First proposed in 2020 by the European Commission, the AI Act establishes a comprehensive regulatory framework for AI across the EU. It most directly affects large U.S. technology firms, which are the leading developers of advanced AI systems, but it also applies to other businesses that use AI in certain capacities.
Risk-Based Approach to AI Regulation
The AI Act employs a risk-based approach, regulating AI applications differently based on their societal risk levels. High-risk AI systems, such as autonomous vehicles, medical devices, loan decisioning systems, educational scoring, and remote biometric identification systems, face strict obligations. These include risk assessment and mitigation, high-quality training datasets to minimize the risk of bias, routine activity logging, and mandatory documentation sharing with authorities.
Ban on Unacceptable AI Applications
The law also bans AI applications deemed to pose unacceptable risk, including social scoring systems that rank citizens based on analysis of their data, predictive policing, and emotion recognition technology in workplaces or schools.
Implications for U.S. Tech Giants
U.S. companies like Microsoft, Google, Amazon, Apple, and Meta are heavily involved in AI development and will be significantly impacted by the new rules. Cloud platforms like Microsoft Azure, Amazon Web Services, and Google Cloud, essential for AI training and operation, will also face increased scrutiny.
Charlie Thompson, senior vice president of EMEA and LATAM for enterprise software firm Appian, noted that the AI Act's implications extend beyond the EU, affecting any organization with operations or impact within the region.
Meta's Regulatory Concerns
Meta has already restricted the availability of its AI models in Europe over regulatory concerns. Earlier this month, the company announced it would not make its LLaMa models available in the EU, citing compliance issues with the EU's General Data Protection Regulation (GDPR).
Global Influence of the AI Act
Eric Loeb, executive vice president of government affairs at Salesforce, suggested that other governments should look to the AI Act as a blueprint for their own AI policies, arguing that the EU's risk-based framework strikes a balance between innovation and safety that other countries would do well to emulate.
General-Purpose AI and Open-Source Models
Generative AI, classified as "general-purpose" AI in the Act, includes models like OpenAI's GPT, Google's Gemini, and Anthropic's Claude. These systems must comply with strict requirements such as adherence to EU copyright law, transparency disclosures about how the models are trained, and cybersecurity protections. Open-source models like Meta's LLaMa and Stability AI's Stable Diffusion are granted some exemptions, provided they make their parameters publicly available and do not pose systemic risks.
Penalties for Non-Compliance
Companies that violate the AI Act could face fines of up to €35 million ($41 million) or 7% of global annual revenues, whichever is higher. This exceeds the penalties under the GDPR, which can reach €20 million or 4% of annual global turnover.
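The "whichever is higher" rule means the effective cap depends on a company's revenue: below €500 million in global annual revenue, the €35 million flat figure dominates; above it, the 7% share does. A minimal sketch of that arithmetic (the function name and euro-denominated inputs are illustrative, not from the Act's text):

```python
def ai_act_max_fine(global_annual_revenue_eur: float) -> float:
    """Illustrative 'whichever is higher' cap for the AI Act's top fine tier."""
    FLAT_CAP_EUR = 35_000_000   # €35 million flat amount
    REVENUE_SHARE = 0.07        # 7% of global annual revenue
    return max(FLAT_CAP_EUR, REVENUE_SHARE * global_annual_revenue_eur)

# A company with €1 billion in global annual revenue: 7% (€70M) exceeds €35M.
print(ai_act_max_fine(1_000_000_000))
# A company with €100 million in revenue: 7% (€7M) is below the €35M floor.
print(ai_act_max_fine(100_000_000))
```

The same structure applies to the GDPR comparison in the paragraph above, with a €20 million flat amount and a 4% revenue share.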
Future Implementation
Although the AI Act is now in force, most of its provisions won't take effect until at least 2026. Obligations for general-purpose AI systems begin 12 months after the Act's entry into force, and existing generative AI systems have a 36-month transition period to reach compliance.