EU Gathers Experts to Draft AI Code of Practice for General AI Models
Image Source: ChatGPT-4o
The European Union is making significant strides in shaping the future of artificial intelligence by spearheading the creation of the first “General-Purpose AI Code of Practice” under the AI Act. Announced on September 30, this initiative is being led by the European AI Office and aims to establish a framework addressing key issues such as transparency, copyright, risk assessment, and internal governance.
Global Experts Collaborate on Framework
The EU has gathered hundreds of experts from academia, industry, and civil society for a months-long process that will culminate in the final draft of the Code of Practice by April 2025. The kick-off plenary, which had nearly 1,000 participants, marked the official start of this initiative, with the experts working collaboratively to draft a comprehensive framework for AI model regulation.
Four Working Groups Established
To streamline the development process, the EU has established four working groups, each focusing on a different aspect of AI governance. These working groups are led by prominent industry figures, including Nuria Oliver, an expert in artificial intelligence, and Alexander Peukert, a specialist in German copyright law. The groups will address key topics, such as transparency and copyright, risk identification, technical risk mitigation, and internal risk management.
The European AI Office has confirmed that these working groups will meet regularly between October 2024 and April 2025 to gather stakeholder input, draft provisions, and refine the Code of Practice through a process of ongoing consultation.
The AI Act: A Risk-Based Approach to AI Governance
The EU’s AI Act, passed in March 2024 by the European Parliament, is a landmark piece of legislation that introduces a risk-based approach to regulating artificial intelligence. The Act classifies AI systems into different risk categories, ranging from minimal to unacceptable, and mandates specific compliance measures for each category.
This framework is particularly relevant to general-purpose AI models, such as large language models (LLMs), which are frequently classified as high-risk due to their wide range of applications and potential societal impact.
Challenges and Criticisms from the AI Industry
While the AI Act has received praise for its forward-thinking approach, it has also drawn criticism from some industry players, including Meta, which argue that the regulations may be overly restrictive and could stifle innovation. In response, the EU's collaborative approach to drafting the Code of Practice aims to balance safety and ethics with continued technological innovation.
Multi-Stakeholder Consultation and Global Influence
As part of this process, the EU has opened a multi-stakeholder consultation, which has already received over 430 submissions. These contributions will play a key role in shaping the final provisions of the Code of Practice.
What This Means for the Future of AI
By April 2025, the EU aims to have the Code of Practice finalized, setting a precedent for the responsible development and deployment of general-purpose AI models that minimizes risks while maximizing societal benefits. This effort will likely have a far-reaching impact, influencing AI policies and regulations around the world.
As AI continues to evolve rapidly, this initiative is poised to influence the global landscape. Countries looking to regulate emerging technologies may turn to the EU’s Code of Practice as a model for balancing risk management with innovation, establishing new global standards for AI governance.