Commerce Department Proposes Mandatory Reporting for AI Developers
The U.S. Commerce Department’s Bureau of Industry and Security (BIS) has proposed new rules that would require leading artificial intelligence (AI) developers and cloud providers to submit detailed reports to the federal government. The proposed regulations aim to enhance the safety, security, and reliability of AI technology as it continues to evolve.
New AI Reporting Requirements for National Security
In a Notice of Proposed Rulemaking issued on Sept. 9, the BIS outlined the proposed reporting requirements. The goal is to ensure that AI systems can withstand cyberattacks, minimize the risk of exploitation by foreign adversaries, and prevent misuse by non-state actors. According to the department, the rapid advancement of AI presents both significant opportunities and risks.
"As AI is progressing rapidly, it holds both tremendous promise and risk," said Secretary of Commerce Gina M. Raimondo. "This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security."
Scope of the Proposed Reporting
The proposed regulations would require AI developers to provide information about their development activities, cybersecurity measures, and the results of red-teaming exercises. Red-teaming refers to testing AI systems for vulnerabilities, including their susceptibility to cyberattacks and their potential to facilitate the creation of chemical, biological, radiological, or nuclear weapons.
Alan F. Estevez, Under Secretary of Commerce for Industry and Security, emphasized the importance of these rules: "This proposed reporting requirement would help us understand the capabilities and security of our most advanced AI systems."
Biden Administration’s Focus on AI Safety
This proposal follows an executive order issued by the Biden Administration in October 2023, which laid out guidelines for safe AI development. The order required AI developers to share safety test results and key information with the federal government. It also called for the creation of tools and standards to ensure the security and trustworthiness of AI systems.
Additionally, the order focused on mitigating the risks associated with using AI to engineer dangerous biological materials, urging companies to establish strong safeguards for biological synthesis screening.
The new BIS proposal is part of a broader government effort to better understand emerging risks in AI and to inform future legislation on the subject.