- Commerce Department Backs ‘Open’ AI Models, Calls for Risk Monitoring
On Monday, the U.S. Commerce Department released a report supporting “open-weight” generative AI models like Meta’s Llama 3.1. However, it recommended that the government develop new capabilities to monitor these models for potential risks.
The report, authored by the National Telecommunications and Information Administration (NTIA), emphasized that open-weight models make generative AI accessible to small companies, researchers, nonprofits, and individual developers. It recommended that the government not impose restrictions on access to open models without first investigating the potential market harm such restrictions might cause.
This view aligns with recent remarks by Federal Trade Commission Chair Lina Khan, who supports open models as a way to foster competition by enabling more small players to bring their ideas to market.
“The openness of the largest and most powerful AI systems will affect competition, innovation, and risks in these revolutionary tools,” said Alan Davidson, assistant secretary of Commerce for Communications and Information and NTIA administrator. “NTIA’s report recognizes the importance of open AI systems and calls for more active monitoring of risks from the wide availability of model weights for the largest AI models. Government has a key role to play in supporting AI development while building capacity to understand and address new risks.”
The release of the report comes as regulators in the U.S. and abroad consider rules that could impose new requirements on companies releasing open-weight models.
Regulatory Context
In California, bill SB 1047 is nearing passage. It would require any company training a model using more than 10²⁶ FLOPs of compute to strengthen its cybersecurity and develop a way to “shut down” copies of the model within its control. Meanwhile, the EU has finalized compliance deadlines for companies under its AI Act, which introduces new rules concerning copyright, transparency, and AI applications.
Meta has expressed concerns that the EU’s AI policies may prevent the release of some open models in the future. Similarly, several startups and large tech companies have criticized California’s proposed law as being overly burdensome.
Governance and Monitoring
The NTIA’s governance philosophy for AI models is not entirely hands-off. The report calls for the government to create a program to continually gather evidence on the risks and benefits of open models, evaluate that evidence, and take action as necessary, including imposing certain restrictions on model availability if warranted. Specifically, it proposes researching the safety of various AI models, supporting risk mitigation research, and developing risk-specific indicators to signal when policy changes might be needed.
These steps are intended to align with President Joe Biden’s executive order on AI, which urges government agencies and companies to set new standards for the creation, deployment, and use of AI.
“The Biden-Harris Administration is pulling every lever to maximize the promise of AI while minimizing its risks,” said Gina Raimondo, U.S. Secretary of Commerce, in a press release. “Today’s report provides a roadmap for responsible AI innovation and American leadership by embracing openness and recommending how the U.S. government can prepare for and adapt to potential challenges ahead.”