California’s AI Regulation Bill SB1047 Awaits Gov. Newsom’s Decision
Image Source: ChatGPT-4o
California lawmakers have approved a landmark bill, Senate Bill 1047, which requires companies developing or modifying powerful AI systems to test their models for potential societal harm. The bill now awaits the decision of Governor Gavin Newsom, who will determine whether it becomes law.
Key Provisions of Senate Bill 1047
Senate Bill 1047 mandates that companies spending $100 million to train an AI model, or $10 million to modify one, must conduct safety testing. These tests are intended to assess the AI’s potential to cause significant harm, such as enabling cybersecurity attacks, infrastructure sabotage, or the development of chemical, biological, radiological, or nuclear weapons.
Legislative Voting Record
Following a decisive 32-1 vote in the Senate in May, the California State Assembly voted 48-15 to pass Senate Bill 1047 late Wednesday afternoon. The bill then returned to the Senate, which gave final approval Thursday morning by concurring in the Assembly’s amendments. The strong majority support in both chambers underscores the significance of the legislation as it heads to Governor Gavin Newsom’s desk for consideration.
Controversy and Legislative Journey
The bill passed with strong support in both the Senate and Assembly despite significant opposition from tech giants such as Google, Meta, and OpenAI. These companies argue that compliance costs could stifle innovation, business growth, and job creation, particularly for startups, and could discourage the release of open-source AI tools out of fear of legal liability.
Supporters of the bill, including former OpenAI employees, Elon Musk, and AI researcher Yoshua Bengio, argue that the risks posed by AI technologies are too significant to ignore. They believe that proactive regulation is essential to prevent potential disasters and ensure that AI development is aligned with public safety.
Over the past year, major AI companies, including those based in California, have entered into voluntary agreements with the White House and government leaders in Germany, South Korea, and the United Kingdom to test their AI models for potentially dangerous capabilities. These agreements reflect a growing international concern about the risks posed by advanced AI technologies.
In response to OpenAI's opposition to SB 1047, Senator Scott Wiener dismissed the idea that the bill would drive businesses out of California, calling it a “tired” argument. He pointed out that similar predictions were made when California passed net neutrality and data privacy laws in 2018, yet those fears never materialized. Wiener argued that the state’s leadership in tech regulation has not hindered its innovation economy.
Daniel Kokotajlo, a former OpenAI employee and whistleblower, echoed this sentiment, suggesting that SB 1047 could actually demonstrate how innovation and regulation can coexist. He predicted that, despite concerns, the pace of AI progress in California will likely accelerate if the bill becomes law, surprising many who feared it would stifle development.
Critics of the bill, including OpenAI, have argued that AI safety should be regulated at the federal level rather than by individual states. Wiener acknowledged this perspective, stating that he would have preferred Congress to take the lead on AI regulation. However, he criticized Congress for its inaction, noting that it has been largely paralyzed on tech regulation issues, with the exception of moves like the TikTok ban.
SB 1047 underwent several rounds of amendments during its legislative journey. One significant amendment removed the proposed Frontier Model Division, which was initially intended to oversee the most advanced and powerful AI systems. Other amendments included clarifications on the scope of safety testing required and adjustments to the financial thresholds that determine which companies must comply with the law. Despite these changes, the core intent of the bill—to ensure AI models are tested for their potential to cause societal harm—remained intact.
Governor Newsom’s Decision
The bill now sits on Governor Gavin Newsom’s desk. While Newsom has acknowledged the need for AI regulation, he has also cautioned against overregulation, particularly in a state that is home to many of the world’s leading AI companies. Eight members of Congress from California have urged Newsom to veto the bill, citing concerns about its impact on the industry.
Broader Implications for AI Regulation
Senate Bill 1047 is part of a broader movement in California to address the challenges posed by AI. In addition to this bill, the California Legislature has moved forward with other AI-related legislation, including laws to combat deepfakes related to elections, protect against automated discrimination, and ensure the safe use of AI in schools.
These efforts reflect California’s proactive stance on AI regulation, positioning the state as a leader in addressing the ethical and societal implications of this rapidly advancing technology.
Next Steps for AI Regulation in California
As the bill awaits the governor’s decision, the California Government Operations Agency is preparing to release a report on how AI could harm vulnerable communities. This report, part of a broader generative AI executive order, will provide further insights into the potential risks and benefits of AI technologies in society.