Saturday, June 21, 2025

California Takes Aim at AI Giants, Again

Introduction to AI Regulation

Last September, all eyes were on Senate Bill 1047 as it made its way to California Governor Gavin Newsom’s desk — and died there when he vetoed the buzzy piece of legislation. SB 1047 would have required makers of the largest AI models, particularly those costing $100 million or more to train, to test them for specific dangers. AI industry whistleblowers weren’t happy about the veto; most large tech companies were. But the story didn’t end there. Newsom, who felt the legislation was too stringent and one-size-fits-all, tasked a group of leading AI researchers with proposing an alternative plan — one that would support the development and governance of generative AI in California, along with guardrails for its risks.

The New Report

On Tuesday, that report was published. The authors of the 52-page “California Report on Frontier AI Policy” said that AI capabilities — including models’ chain-of-thought “reasoning” abilities — have “rapidly improved” since Newsom’s veto of SB 1047. Drawing on historical case studies, empirical research, modeling, and simulations, they suggested a new framework that would require more transparency and independent scrutiny of AI models. The report arrives against the backdrop of a possible 10-year moratorium on state AI regulation, backed by a Republican Congress and companies like OpenAI.

Impact of AI on Various Industries

The report — co-led by Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence; Mariano-Florentino Cuéllar, President of the Carnegie Endowment for International Peace; and Jennifer Tour Chayes, Dean of the UC Berkeley College of Computing, Data Science, and Society — concluded that frontier AI breakthroughs in California could heavily impact agriculture, biotechnology, clean tech, education, finance, medicine, and transportation. Its authors agreed it’s essential not to stifle innovation and to “ensure regulatory burdens are such that organizations have the resources to comply.”

The Need for Safeguards

But reducing risks is still paramount, they wrote: “Without proper safeguards… powerful AI could induce severe and, in some cases, potentially irreversible harms.” The group published a draft version of their report in March for public comment. But even since then, they wrote in the final version, evidence that these models contribute to “chemical, biological, radiological, and nuclear (CBRN) weapons risks… has grown.” Leading companies, they added, have self-reported concerning spikes in their models’ capabilities in those areas.

Changes to the Draft Report

The final version includes several changes from the draft. The authors now note that California’s new AI policy will need to navigate quickly changing “geopolitical realities.” They added more context about the risks that large AI models pose, and they took a harder line on how companies should be categorized for regulation, saying a focus purely on how much compute their training required was not the best approach. AI’s training needs are changing all the time, the authors wrote, and a compute-based definition ignores how these models are adopted in real-world use cases.

The Importance of Transparency

The report calls for whistleblower protections, third-party evaluations with safe harbor for the researchers conducting them, and direct information-sharing with the public, enabling transparency that goes beyond what leading AI companies currently choose to disclose. One of the report’s lead writers, Scott Singer, told The Verge that AI policy conversations have “completely shifted on the federal level” since the draft report. He argued that California, however, could help lead a “harmonization effort” among states for “commonsense policies that many people across the country support.”

Third-Party Risk Assessment

The authors concluded that risk assessments would incentivize companies like OpenAI, Anthropic, Google, and Microsoft to amp up model safety, while helping paint a clearer picture of their models’ risks. Currently, leading AI companies typically do their own evaluations or hire second-party contractors to do so. But third-party evaluation is vital, the authors say. Not only are “thousands of individuals… willing to engage in risk evaluation, dwarfing the scale of internal or contracted teams,” but also, groups of third-party evaluators have “unmatched diversity, especially when developers primarily reflect certain demographics and geographies that are often very different from those most adversely impacted by AI.”

Challenges in Implementing Third-Party Evaluations

But if you’re allowing third-party evaluators to test the risks and blind spots of your powerful AI models, you have to give them access — for meaningful assessments, a lot of access. And that’s something companies are hesitant to do. It’s not even easy for second-party evaluators to get that level of access. Metr, a company that partners with OpenAI to test its models, wrote in a blog post that it was given less time to test OpenAI’s o3 model than it had been given for past models, and that OpenAI didn’t provide sufficient access to data or to the model’s internal reasoning.

Conclusion

The report highlights the need for a balanced approach to AI regulation, one that promotes innovation while minimizing risks. Its authors emphasize transparency, third-party risk assessment, and whistleblower protections as central to the safe development and deployment of AI models. As AI continues to reshape industries, policymakers, industry leaders, and researchers will need to work together on effective guardrails, harnessing the technology’s potential while mitigating its harms.
