Key Takeaways
- Update: On September 29, 2024, CA Gov. Gavin Newsom announced his veto of SB 1047, dubbed the AI Safety Bill, ending a month of lobbying and speculation. More on his statement and industry reactions below.
- SB 1047 Overview: California’s SB 1047 requires developers of the most powerful AI models to comply with strict safety checks before training those models.
- Controversy: The bill regulates the technology itself, sparking debate over its approach and enforcement.
September 29, 2024 Update: California Governor Halts Contentious AI Safety Bill
On September 29, 2024, California Governor Gavin Newsom announced his veto of SB 1047, commonly known as the AI Safety Bill, after months of intense lobbying by supporters and opponents.
Newsom released a statement praising the bill’s ambitions and agreeing with the need to protect against catastrophic risks and hold companies accountable.
He laid out two key objections to its framework:
- The bill targeted models solely on the basis of their cost and the computational resources used to train them, regardless of whether a model is “deployed in high-risk environments, involves critical decision-making or the use of sensitive data”;
- It excluded smaller models that “may emerge as equally or even more dangerous than the models targeted” depending on their uses, risks, or impacts.
Even so, Newsom maintained that California and other states have a valid role to play in regulating “specific, known risks posed by AI” before harm occurs.
Reactions to SB 1047 Veto
Democratic State Senator Scott Wiener, the bill’s author, shepherded it through the legislature and agreed to compromises that weakened certain provisions to win more industry support. He expressed disappointment that “companies aiming to create an extremely powerful technology face no binding restrictions.”
As we noted in our previous overview of the bill (below), many developers of the most popular generative AI models, including Google, Microsoft, and OpenAI, criticized the vagueness of the bill’s language, including its definitions of terms such as “unreasonable risk” and of what might qualify to “cause or materially enable a critical harm.”
Opposition also came from advocates of open-source AI development, who argued that the bill’s compliance requirements would have severely hindered those efforts.
In the short term, Newsom’s veto removes a potential hurdle to continued rapid innovation by the largest companies and models targeted by the bill.
Smaller organizations hoping the new regulations on bigger companies might help them catch up will be disappointed.
What Happens Next?
The concern about misuse of AI and existential risk isn’t going away.
The bill’s sponsors have already said they will try again in California, and Newsom’s statement clearly outlines a path for future regulation that is more focused on particular risks as opposed to size or compute.
It’s less likely that other states will propose similarly sweeping legislation. California is uniquely positioned to lead on technology regulation: it is home to Silicon Valley, it is the largest state market in the nation, and Democrats control both the legislature and the governor’s office.
California and other states have passed or are considering more targeted legislation, and we should expect that trend to continue.
What Do Enterprise Executives and Business Leaders Need to Do?
Short Term
Monitor Other Emerging Regulations: While SB 1047 was vetoed, California and other states have recently passed additional AI laws:
- California AB-2013: GenAI Training Data Transparency – Sets standards for transparency around training data.
- California SB-942: AI Transparency Act – Requires AI developers to build free AI detection tools and ensure their outputs are labeled.
- Utah SB-149: AI Policy Act – Requires disclosure of GenAI to consumers.
- Tennessee: ELVIS Act – Protects individuals from unauthorized AI deepfakes of their voice or likeness.
Ensure compliance with these new standards, particularly around transparency and AI labeling requirements.
Looking for a full list of AI regulations and standards? Visit our AI Guides for the latest legislation and requirements.
Long Term
Build a Strong AI Governance Framework: Prepare for future regulations by establishing a detailed AI governance framework that includes transparency, accountability, and risk management, ensuring readiness for more specific AI laws that may emerge.
__________________________
What Is SB 1047 and Why Is Everyone Talking About It?
California SB 1047, formally titled The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, requires developers of the largest and most powerful AI models to comply with various safety checks before training such models.
Since its inception, the bill has been controversial, receiving comments, support, and criticism from many angles.
Influential voices in the AI community have sparked intense debate over the bill.
Geoffrey Hinton, a Turing Award winner and ‘Godfather of AI,’ and Yoshua Bengio, a deep learning pioneer and professor at the University of Montreal, are largely in favor of it, while Yann LeCun, Chief AI Scientist at Meta, and Andrew Ng, co-founder of Coursera and Adjunct Professor at Stanford University, are strongly opposed.
It passed the California legislature on August 28, 2024. On September 29, 2024, California Governor Gavin Newsom announced his veto of SB 1047.
What Does California SB 1047 Mandate?
The bill targets developers of the largest AI models, defined by thresholds on training compute power and cost. It requires these developers to meet several safety criteria before training their models. Among these requirements, developers must implement quick-acting kill switches and enforce safety and security protocols focused on detecting and preventing catastrophic risks.
Additionally, developers are prohibited from using or providing a covered model if there is a risk that it could cause or enable critical harm—defined as a risk of mass casualties, attacks on infrastructure, or similarly catastrophic threats to public safety or security.
Developers must retain a third party to conduct compliance audits against these requirements. They must also share the unredacted audit report with the California Attorney General upon request. Developers must submit a statement of compliance and report all safety issues to the Attorney General.
Why Is SB 1047 Controversial?
SB 1047 is controversial for several reasons.
Critics argue that the bill attempts to regulate foundation models rather than focusing on AI at the application level, which they believe would stifle innovation. Yann LeCun, Meta’s Chief AI Scientist, tweeted that holding developers accountable for their customers’ applications “will simply stop technology development.” Another renowned AI scientist, Andrew Ng, used an analogy to make this point: electric motors are a technology, and what we worry about are their dangerous applications (guided missiles, less so microwaves), so it doesn’t make sense to hold motor builders accountable for those applications.
Some critics argue that the bill’s enforcement mechanisms are too weak compared to the catastrophic harms it seeks to mitigate. Previous versions of the bill included criminal charges, which have since been downgraded to civil penalties.
Other critics, including OpenAI, argue that this type of AI regulation should be left to the federal government to avoid a patchwork of inconsistent state-level requirements. However, state governments are stepping up on AI regulation, given the federal government’s slow pace in taking action.
On the other side of the debate, proponents of the bill, including Yoshua Bengio, Geoffrey Hinton, Lawrence Lessig, and Stuart Russell, argue that the next generation of AI systems poses “severe risks” if “developed without sufficient care and oversight.” They describe the bill as the “bare minimum for effective regulation of this technology.”
What Is the Status of California SB 1047?
Update: While SB 1047 passed the California State Legislature on August 28, 2024, Governor Newsom announced on September 29, 2024, that he had vetoed the ‘well-intentioned’ bill, stating that it was not necessarily ‘the best approach to protecting the public from real threats posed by the technology.’
Keep Learning