California Governor Vetoes Bill Regulating AI Development

California Governor Gavin Newsom either just spared his state the mass defection of the biggest and potentially most profitable businesses now developing artificial intelligence (AI), or fumbled the opportunity to show the nation how Big Tech might finally be regulated. Only hindsight will tell how his Sunday veto of a state bill seeking to oversee AI will be judged.


Newsom’s refusal to sign the AI-regulating SB 1047 bill that California legislators passed in August marks a rare instance of the governor opposing lawmakers’ efforts to enshrine progressive political positions into law. While he had previously backed fellow Democrats’ votes to raise the hourly minimum wage for fast-food workers to $20, battle junk fees, and ban employers from staging mandatory anti-union meetings, yesterday he blocked their attempt to set guardrails for cutting-edge AI development. Not surprisingly, sector giants like Amazon, Meta, Google, and Microsoft rejoiced.


The reason? The “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” would have created exacting testing requirements and legal accountability for developers of the biggest projects extending the limits of AI capabilities. As written, SB 1047 would have forced operators of gigantic, cutting-edge “frontier models” costing more than $100 million to train to establish testing protocols and build a kill switch into all large applications in case of emergencies. It also would have held them legally responsible for any damages, deaths, or other consequences of the tech going haywire.


In general, it sought a first-of-its-kind approach to AI oversight that the federal government has been unable to enact, just as it failed to regulate Big Tech during its rise to its current, seemingly unchallengeable position of dominance.


Little wonder, then, that among the opponents of SB 1047 was the Chamber of Progress, a trade group representing Big Tech’s largest companies, including AI developers Apple, Amazon, Google, and Meta. Joining them was Democratic Congresswoman Nancy Pelosi, the former Speaker of the House of Representatives, a body that floundered at establishing meaningful tech regulation during Big Tech’s rise. Congress continues to come up short on similar guardrails for AI today.


That absence of federal oversight has led several states—including Colorado, Maryland, and Illinois—to pass laws applicable to some AI uses, such as deepfake content in political ads and facial recognition apps in certain business activities. Indeed, Newsom himself signed a California “deepfake” bill into law this month, as well as several others looking to protect people from potential abuse of AI.


This time, however, Newsom decided SB 1047 would cast too wide and problematic a regulatory net. He also considered it an overreach that risked driving “32 of the world’s 50 leading AI companies” from their California headquarters to states with less binding laws. But mostly, he argued, it would have hindered development at the top while leaving the field wide open to both good and bad actors at lower levels of future AI creation.


“SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making, or the use of sensitive data… (but) applies stringent standards to even the most basic functions–so long as a large system deploys it,” a Newsom statement explaining his veto said. “Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047–at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”


The bill’s author, Senator Scott Wiener, lamented fellow Democrat Newsom’s veto, saying it falls into the same trap of inaction in the face of a pressing concern that has kept the federal government from crafting national AI regulation.


“This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from US policymakers, particularly given Congress’s continuing paralysis around regulating the tech industry in any meaningful way,” Wiener said in a statement.


The San Francisco legislator isn’t the only one lamenting the veto. The bill’s backers included actors such as Star Wars star Mark Hamill, the SAG-AFTRA performers’ union, and many top AI researchers and developers.


Also joining their ranks was the usually soft-spoken, controversy-averse serial founder Elon Musk, who recently relocated the headquarters of his SpaceX and X social media companies out of California to protest state laws he didn’t like.


“This is a tough call and will make some people upset, but all things considered, I think California should probably pass the SB 1047 AI safety bill,” Musk said on X in August, prior to the legislation’s passage. “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public.”


And that’s a position of principle Musk is certain not to budge from, unless, of course, the xAI startup he founded 18 months ago to develop artificial intelligence applications becomes a big enough moneymaker to force an about-face.