California Gov. Gavin Newsom, a Democrat, on Sunday vetoed a bill to create safety measures for large artificial intelligence models, which would have been the first such law in the nation.
The governor's veto delivers a major setback to attempts to create guardrails around AI and its rapid evolution with little oversight, according to The Associated Press. The legislation faced staunch opposition from startups, tech giants and several Democratic lawmakers.
Newsom said earlier this month at Dreamforce, an annual conference hosted by software giant Salesforce, that California must lead in regulating AI as the federal government has failed to put safety measures in place, but that the proposal "can have a chilling effect on the industry."
SB 1047, the governor said, could have hurt the homegrown industry by setting up strict requirements.
"While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," Newsom said in a statement. "Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology."
Newsom announced instead that the state will partner with several industry experts to develop safety measures for powerful AI models.
SB 1047 would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated for harmful purposes, such as wiping out the state's electric grid or helping to build chemical weapons — scenarios that experts say could become possible as the industry continues to rapidly evolve.
The legislation also would have provided whistleblower protection to industry workers.
Democratic state Sen. Scott Wiener, who authored the bill, said the veto was "a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and the welfare of the public and the future of the planet."
"The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing," he said in a statement. "While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public."
Wiener said the debate around the bill has helped put a spotlight on the issue of AI safety, and that he would continue pushing to advance safety measures around the technology.
Tech billionaire Elon Musk supported the measure.
The proposal is one of several bills passed by the state Legislature this year seeking to regulate AI, combat deepfakes and protect workers. State lawmakers said California must take action this year, pointing to the consequences of failing to rein in social media companies when it had the chance.
Supporters of the bill said it could have brought some transparency and accountability to large-scale AI models, as developers and experts say they still do not have a full understanding of how AI models behave.
The bill sought to address systems that require a high level of computing power and more than $100 million to build. No current AI models have met those criteria, but some experts say that could change within the next year.
"This is because of the massive investment scale-up within the industry," Daniel Kokotajlo, a former OpenAI researcher who stepped down earlier this year over what he described as the company's disregard for AI risks, told The Associated Press. "This is a crazy amount of power to have any private company control unaccountably, and it’s also incredibly risky."
The U.S. is behind Europe in regulating the growing technology, which is raising concerns about job loss, misinformation, invasions of privacy and automation bias, supporters of the measure said. The California bill was not as comprehensive as regulations in Europe, but supporters say it would have been a step in the right direction.
Last year, several leading AI companies voluntarily agreed to follow safeguards set by the White House, which include testing and sharing information about their models. The California bill, according to its supporters, would have required AI developers to follow requirements similar to those safeguards.
But critics of the measure argued that it would harm tech and stifle innovation in the Golden State. The proposal would have discouraged AI developers from investing in large models or sharing open-source software, according to the critics, which include U.S. Rep. Nancy Pelosi, D-Calif.
Two other AI proposals, which also faced opposition from the tech industry, did not pass ahead of a legislative deadline last month. The bills would have required AI developers to label AI-generated content and prohibited discrimination by AI tools used to make employment decisions.
California lawmakers are still considering new rules against AI discrimination in hiring practices.
The governor previously said he wanted to protect the state's status as a global leader in AI, citing that 32 of the world's top 50 AI companies are in the Golden State.
Newsom has said California is an early adopter of AI, noting that the state could deploy generative AI tools in the near future to combat highway congestion, provide tax guidance and streamline homelessness programs.
Earlier this month, Newsom signed some of the strictest laws in the country to combat election deepfakes and to protect Hollywood workers from unauthorized AI use.
The Associated Press contributed to this report.