• AiNews.com
  • Posts
  • EU AI Act: Draft Code of Practice Outlines Compliance Path for Big AI

EU AI Act: Draft Code of Practice Outlines Compliance Path for Big AI

An illustration symbolizing the European Union's AI regulations. The EU flag with a circle of golden stars is prominently featured, surrounded by interconnected icons representing artificial intelligence, such as neural networks, data clouds, and robotic arms. The background transitions from blue at the top to white at the bottom, evoking a sense of technological progress. Additional elements include compliance symbols like checkmarks and scales of justice, integrated subtly to emphasize legal governance. The modern, clean design reflects the professionalism and significance of the EU AI Act.

Image Source: ChatGPT-4o

EU AI Act: Draft Code of Practice Outlines Compliance Path for Big AI

The European Union has unveiled a draft Code of Practice for providers of general-purpose AI (GPAI) models, offering early guidance on compliance with its groundbreaking AI Act. Released alongside a call for public feedback, the draft outlines principles and objectives but leaves many specifics unresolved. Feedback is open until November 28, 2024, with the final version expected by May 1, 2025.

The AI Act, which became law earlier this year, establishes a risk-based regulatory framework for artificial intelligence. While it governs a range of AI applications, the draft Code focuses on foundational, high-impact AI systems like OpenAI’s GPT models, Google’s Gemini, France's Mistral, and Meta’s Llama. Providers must meet these guidelines to demonstrate compliance and avoid penalties.

Key Provisions and Compliance Deadlines

The draft Code, currently a 36-page document, outlines measures GPAI makers should adopt, though it allows for alternative compliance strategies if justified. Key deadlines include:

  • August 1, 2025: Transparency requirements for GPAI providers take effect.

  • August 1, 2027: Risk assessment and mitigation rules apply to systems deemed to carry “systemic risk.”

Systemic Risks and Mitigation

The draft anticipates a “small number” of systemic risk GPAIs, defined as models with total computing power exceeding 1025 FLOPs. If this assumption changes, the Code may adopt a tiered framework focusing on higher-risk systems.

Addressing Systemic Risks

GPAI providers must address a range of systemic risks, including:

  • Offensive cybersecurity risks (e.g., vulnerability discovery).

  • Chemical, biological, radiological, and nuclear threats.

  • “Loss of control,” referring to the inability to manage a powerful autonomous general-purpose AI.

  • Large-scale disinformation and manipulation that could undermine democratic processes or erode trust in media.

  • Privacy infringements and surveillance risks.

  • Risks to public health from harmful AI applications.

  • Automated use of models for AI research and development (R&D) without oversight.

  • Deepfake content, including AI-generated child sexual abuse material and non-consensual intimate imagery.

The draft also calls for providers to use "best-in-class evaluations" to identify and mitigate these risks, such as red-teaming, human testing, benchmarks, and simulations.

Transparency and Copyright Measures

The draft specifies transparency obligations for GPAI providers, including:

  • Disclosure of web crawler names and robots.txt features used during data collection.

  • Documentation of data sources for training, testing, and validation, including evidence of authorization to use copyrighted material.

Providers must also establish a single point of contact for handling copyright grievances, enabling rapid communication with rights holders. These measures address ongoing legal disputes over the use of copyrighted data in AI training.

Forecasting and Incident Reporting

The draft Code introduces measures for forecasting risks and handling incidents to enhance accountability for GPAI providers. A key proposal involves providers committing to "best effort estimates" of when their models might acquire attributes that could trigger systemic risks, such as:

  • Dangerous model capabilities, like cyber-offensive tools or weapon acquisition.

  • Problematic tendencies, such as deception, bias, or unreliability.

By 2027, leading AI developers could be required to publish timelines for anticipated risk thresholds, enabling earlier interventions to mitigate potential harms.

Additionally, the draft emphasizes the importance of robust incident reporting. Providers must:

  • Track serious incidents arising from their GPAIs.

  • Report details, including corrective actions, to the AI Office and relevant national authorities without undue delay.

According to the draft, “Signatories commit to identify and keep track of serious incidents, as far as they originate from their general-purpose AI models with systemic risk, document and report, without undue delay, any relevant information and possible corrective measures to the AI Office and, as appropriate, to national competent authorities.”

The drafters acknowledge that "serious incident" definitions remain a work in progress, inviting feedback on what qualifies as an incident and appropriate responses. Questions also explore how to address incidents linked to open-source models or openly available model weights.

Open Source and Proportionality

The draft Code recognizes the unique challenges and opportunities associated with open-source AI development. Key considerations include:

  • Tailored Requirements: The Code emphasizes proportionality, aiming to avoid overburdening small and medium-sized enterprises (SMEs) and startups. Measures should consider their financial resources and capacity, compared to well-funded organizations leading AI development.

  • Diverse Distribution Models: Open-source models, which often rely on community-driven innovation, may require flexible compliance approaches. For instance, the draft poses questions about the application of serious incident response processes to providers of open-weight models.

Balancing Risks and Benefits:

  • Benefits: Transparency, collaboration, and fostering innovation.

  • Risks: Potential misuse and malicious applications.

The authors stress that measures should be “proportionate, with a particular focus on tailoring to the size and capacity of a specific provider, particularly SMEs and start-ups with less financial resources than those at the frontier of AI development.”

Next Steps in the Drafting Process

The draft Code draws on input from over 430 stakeholder submissions, international guidelines, and AI safety frameworks like the G7 Code of Conduct, Frontier AI Safety Commitments, and the Bletchley Declaration. The drafting groups emphasize its provisional nature, inviting feedback to refine the Code into a more granular final version.

“The suggestions in the draft Code are provisional and subject to change,” the authors note. “We invite your constructive input as we further develop and update the contents of the Code and work towards a more granular final form for May 1, 2025.”

Looking Ahead

The draft Code of Practice represents an important step toward operationalizing the EU AI Act, signaling increased accountability for major AI developers. However, significant gaps in clarity and implementation remain, particularly around systemic risks and open-source AI. The feedback process will play a crucial role in shaping the final version.

Ultimately, how stakeholders respond to this draft will shape the future of AI governance in the EU, impacting not only developers but the broader global AI landscape.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.