Major AI Companies Silent on California’s New AI Transparency Law
Image Source: ChatGPT-4o
On Sunday, California Governor Gavin Newsom signed AB-2013 into law, a significant step toward AI transparency. The legislation requires companies developing generative AI systems to publish a high-level summary of the data used to train them, detailing whether the data was owned, purchased, or licensed, and whether it includes copyrighted or personal information.
Uncertainty in Compliance
Despite the law's clear mandate, many major AI companies have remained silent on whether they will comply. TechCrunch reached out to key industry players, including OpenAI, Microsoft, Google, Amazon, Meta, and several startups like Stability AI, Midjourney, and Runway. Responses were scarce, with only Stability, Runway, and OpenAI explicitly confirming their intent to comply.
An OpenAI spokesperson stated, “OpenAI complies with the law in jurisdictions we operate in, including this one.” Stability also indicated support, noting the company is “supportive of thoughtful regulation that protects the public while at the same time doesn’t stifle innovation.” Meanwhile, Microsoft declined to comment.
What the Law Covers and Its Timeline
Although AB-2013 applies to systems released from January 2022 onward, enforcement of the disclosure requirement won’t begin until January 2026. The law covers only AI systems made available to California residents, which gives companies some flexibility over whether and how it reaches them.
Why the Silence?
A key reason for the reluctance of some companies to comment on compliance could be the nature of how generative AI systems are trained. Many systems are developed using vast amounts of data scraped from the web, which often includes copyrighted and personal materials. In the past, companies were more transparent about their training datasets, but in today's competitive AI landscape, this information is treated as proprietary and closely guarded.
The practice of using data from sources like LAION, The Pile, and even copyrighted works such as Books3 has raised legal concerns. These data sets contain materials that may infringe on copyright laws, leading to lawsuits. OpenAI, Anthropic, and Meta are among the companies facing litigation for allegedly using copyrighted books in their training datasets. Music labels and artists have also filed lawsuits against companies like Udio, Suno, and Stability AI for improper use of their creative works.
Legal Ramifications and the Fair Use Debate
Vendors already face growing legal challenges over their training data, and AB-2013 could compound those challenges by mandating public disclosures that may expose further infringement. Many companies argue that the fair use doctrine protects them, allowing copyrighted material to be used to train AI systems. Some, like Meta and Google, have even modified their platforms’ terms of service to enable broader data collection for AI training purposes.
However, as legal battles continue, the scope of fair use in AI remains a contentious issue. In some cases, companies like Meta have proceeded with using copyrighted material for training despite internal legal warnings, betting on the courts siding with the argument that AI development constitutes fair use.
Potential Impacts of AB-2013
AB-2013 is broad in its application, compelling any entity that fine-tunes or retrains AI systems to also disclose their training data sources. While it includes some exemptions for systems used in defense and cybersecurity, the majority of AI systems will be affected.
If the courts don’t favor fair use defenses, companies may need to rethink their approach. Some might withhold certain AI models from the California market, or develop region-specific models trained solely on licensed or fair-use data. Alternatively, if the law leads to more disclosures, it could spawn a new wave of legal challenges as companies reveal potentially infringing datasets.
Looking Ahead
As the January 2026 deadline for compliance approaches, it’s clear that AB-2013 could have a lasting impact on the AI industry. Whether companies choose to comply fully or seek legal loopholes, this law is likely to drive new practices around transparency and data usage. Moving forward, it will be crucial to see whether this transparency fosters trust in AI or stifles innovation as companies navigate both legal and competitive pressures.