Key Takeaways

  • SB 1047 Overview: California’s SB 1047 requires developers of the most powerful AI models to comply with strict safety checks before training them.
  • Controversy: The bill regulates the technology itself, sparking debate over its approach and enforcement.
  • Current Status: The California legislature passed the bill on August 28, 2024; it now awaits Governor Newsom’s decision, due by September 30, 2024.

What is SB 1047 and why is everyone talking about it?

California SB 1047, formally titled The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, requires developers of the largest and most powerful AI models to comply with various safety checks before training such models.

Since its introduction, the bill has been controversial, drawing both support and criticism from many quarters.

Influential voices in the AI community have sparked intense debate over the bill.

Geoffrey Hinton, a Turing Award winner and ‘Godfather of AI,’ and Yoshua Bengio, a deep learning pioneer and professor at the University of Montreal, are largely in favor of it, while Yann LeCun, Chief AI Scientist at Meta, and Andrew Ng, co-founder of Coursera and Adjunct Professor at Stanford University, are strongly opposed.

It passed the California legislature on August 28, 2024, and now awaits Governor Newsom’s decision.

What Does California SB 1047 Mandate?

The bill targets developers of the largest AI models, defined by thresholds on training compute power and cost. It requires these developers to meet several safety criteria before training their models. Among these requirements, developers must implement quick-acting kill switches and enforce safety and security protocols focused on detecting and preventing catastrophic risks.
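The threshold logic described above can be sketched in a few lines of Python. Note that the specific figures used here (10^26 FLOPs of training compute and a $100 million training cost) are assumptions drawn from public reporting on the bill, not quotations from the statutory text, which is authoritative and includes further nuances (such as provisions for fine-tuned derivative models).

```python
# Hedged sketch of SB 1047's "covered model" thresholds. The constants below
# are assumptions based on public reporting, not the statutory text.
COMPUTE_THRESHOLD_FLOPS = 1e26     # assumed training-compute threshold
COST_THRESHOLD_USD = 100_000_000   # assumed training-cost threshold

def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """Return True if a model would plausibly meet the bill's
    covered-model criteria under the assumed thresholds."""
    return (training_flops > COMPUTE_THRESHOLD_FLOPS
            and training_cost_usd > COST_THRESHOLD_USD)

# Example: a model trained with 5e25 FLOPs at a $40M cost
# would fall below both assumed thresholds.
print(is_covered_model(5e25, 40_000_000))  # False
```

As the Short Term section below notes, no current model is believed to clear these thresholds, which is why the bill's immediate reach is narrow.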

Additionally, developers are prohibited from using or providing a covered model if there is a risk that it could cause or enable critical harm—defined as a risk of mass casualties, attacks on infrastructure, or similarly catastrophic threats to public safety or security.

Developers must retain a third party to conduct compliance audits against these requirements. They must also share the unredacted audit report with the California Attorney General upon request. Developers must submit a statement of compliance and report all safety issues to the Attorney General.

Why Is SB 1047 Controversial? 

SB 1047 is controversial for several reasons.

Critics argue that the bill attempts to regulate foundation models themselves rather than AI at the application level, an approach they believe will stifle innovation. Yann LeCun, Meta’s Chief AI Scientist, tweeted that holding developers accountable for their customers’ applications “will simply stop technology development.” Andrew Ng, another renowned AI scientist, made the same point with an analogy: electric motors are a technology, and what we worry about is their dangerous applications, such as guided missiles, rather than benign ones like microwaves, so it makes little sense to hold the motor builder accountable for how the motor is used.

Some critics argue that the bill’s enforcement mechanisms are too weak compared to the catastrophic harms it seeks to mitigate. Previous versions of the bill included criminal charges, which have since been downgraded to civil penalties.

Other critics, including OpenAI, argue that this type of AI regulation should be left to the federal government to avoid a patchwork of inconsistent state-level requirements. However, state governments are stepping up on AI regulation, given the federal government’s slow pace in taking action.

On the other side of the debate, proponents of the bill, including Yoshua Bengio, Geoffrey Hinton, Lawrence Lessig, and Stuart Russell, argue that the next generation of AI systems poses “severe risks” if “developed without sufficient care and oversight.” They describe the bill as the “bare minimum for effective regulation of this technology.”

What Happens Next With SB 1047?

SB 1047 passed the California state legislature on August 28, 2024.

The bill now moves to Governor Newsom’s desk, where he must sign or veto it by September 30, 2024. It is currently unclear where the governor stands on the bill or whether he will sign it.

What Do Enterprise Executives and Business Leaders Need To Do?

Short Term

Most executives do not need to take any immediate action. This bill only applies to models that meet exceptionally high training cost thresholds. No current AI models—even heavyweights like GPT-4 and Claude 3—are believed to meet these thresholds, but given the pace of advancement, covered models could be developed in the near future.

Long Term

Stay informed about the bill’s implications, as it could set regulatory precedents for high-risk AI models in the future.

If passed, the bill may shift regulatory focus in the U.S. toward developers of high-risk AI, a departure from the current emphasis on those deploying the technology.

If passed, the bill could also lead to increased scrutiny of extreme risks, such as AI-assisted bioweapons, possibly at the expense of regulating “benign” but significant harms like bias or consumer safety.
