
What is the California ADT AI Law? A Guide to California AB 2930 (Formerly AB 331)


High-Level Summary

**Although AB 2930 ultimately did not pass in 2024, its chief sponsor has already announced plans to reintroduce the bill during the 2025 session, with amendments to address criticisms and clarify definitions.**

AB 2930, formerly Assembly Bill 331, is a legislative proposal introduced in California to regulate the use of automated decision tools (ADTs) to prevent algorithmic discrimination.

The core requirements of the bill are to:

  • Prohibit algorithmic discrimination through the regulation of ADTs
  • Mandate impact assessments for ADTs
  • Ensure transparency and accountability
  • Impose governance requirements on both developers and deployers of ADTs
  • Prevent unjustified differential treatment based on protected characteristics
  • Ensure fair use of AI in making consequential decisions

California Assembly Bill 2930 defines ADTs as systems that use AI to make or significantly influence consequential employment decisions, specifically those related to hiring, promotion, termination, pay, and task allocation.

*Previous versions of the bill covered a much broader set of high-risk applications, but in August 2024 the scope was narrowed to employment.*
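As an illustration, the narrowed employment scope can be expressed as a simple membership check. The five decision categories come from the bill's definition above; the function and set names are our own, not terminology from the bill:

```python
# Decision categories within AB 2930's narrowed employment scope.
# The category names follow the bill's definition; this helper itself
# is illustrative, not part of any statutory text.
IN_SCOPE_DECISIONS = {"hiring", "promotion", "termination", "pay", "task allocation"}

def is_consequential_employment_decision(decision_type: str) -> bool:
    """Return True if a decision type falls within AB 2930's employment scope."""
    return decision_type.strip().lower() in IN_SCOPE_DECISIONS
```

For example, `is_consequential_employment_decision("Hiring")` returns `True`, while a marketing-related use case would fall outside the bill's scope.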

Scope

AB 2930 applies to both developers and deployers of automated decision tools (ADTs) used in employment.

Developers are responsible for creating and modifying ADTs, while deployers are entities that use these tools to make or significantly influence consequential decisions affecting individuals.

Compliance Requirements of AB 2930

The compliance requirements of AB 2930 include:

  1. Impact Assessments: Both developers and deployers must perform annual impact assessments on ADTs, detailing the purpose, intended benefits, and potential adverse impacts of the tools.
  2. Governance Programs: Deployers must establish governance programs to manage risks associated with ADTs, including designating responsible personnel, implementing safeguards, and conducting annual reviews.
  3. Transparency and Notification: Deployers must notify individuals when an ADT is used to make a consequential decision, providing information about the tool’s purpose and use.
  4. Public Policy Disclosure: Both developers and deployers must make publicly available a clear policy summarizing the types of ADTs in use and how risks are managed.
  5. Mitigation of Algorithmic Discrimination: Developers and deployers must address and mitigate any identified risks of algorithmic discrimination before using or making ADTs available.

Non-Compliance Penalties

Non-compliance with AB 2930 can result in administrative fines of up to $10,000 per violation, which can be recovered through administrative enforcement actions brought by the Civil Rights Department.

Additionally, civil penalties of $25,000 per violation can be imposed for cases involving algorithmic discrimination.
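The arithmetic matters because both figures are per violation, so exposure scales with the number of affected decisions. A back-of-the-envelope upper bound, under our own simplifying assumption that every violation draws the maximum administrative fine and every discriminatory violation additionally draws the civil penalty:

```python
ADMIN_FINE_PER_VIOLATION = 10_000     # administrative fine cap per violation
CIVIL_PENALTY_PER_VIOLATION = 25_000  # civil penalty for algorithmic discrimination

def max_exposure(violations: int, discriminatory: int = 0) -> int:
    """Illustrative upper-bound exposure: admin fines on all violations plus
    civil penalties on those involving algorithmic discrimination. Actual
    amounts would be set through enforcement, not this formula."""
    return (violations * ADMIN_FINE_PER_VIOLATION
            + discriminatory * CIVIL_PENALTY_PER_VIOLATION)
```

For instance, three violations, one of which involves algorithmic discrimination, would cap out at `max_exposure(3, 1)`, i.e. $55,000.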

Status

AB 2930 did not advance before the end of the 2024 legislative session. The bill had passed the Assembly and was under review by the Senate Judiciary Committee when it stalled; as noted above, its sponsor has announced plans to reintroduce it in 2025.

Update: On August 15, 2024, the scope of the bill was limited to only certain employment-related applications.

How Can Companies Ensure Compliance with AB 2930?

Drawing from our ongoing work in AI governance and responsible AI, we’ve observed how leading organizations are adapting to California’s AI regulation.

To implement AB 2930’s requirements, companies should:

  1. Conduct Impact Assessments: Regularly perform detailed assessments of AI tools to identify potential risks and adverse impacts.
  2. Establish Governance Programs: Develop comprehensive governance programs that include risk management strategies, designate responsible personnel, and implement safeguards.
  3. Ensure Transparency: Notify individuals when AI tools are used in consequential decisions, providing clear and accessible information about the tools’ purposes and impacts.
  4. Develop Public Policies: Make publicly available policies that summarize AI tools in use and how associated risks are managed.
  5. Monitor Compliance: Stay updated with regulatory changes and ensure ongoing compliance with all requirements to avoid penalties and maintain ethical AI practices.

How FairNow’s AI Governance Platform Helps

Built on deep industry expertise, FairNow’s AI Governance Platform addresses the unique challenges of high-risk AI categories.

Our platform, designed by experts with experience in highly regulated industries, provides:

  • Streamlined compliance processes and reduced reporting times
  • A centralized AI inventory management with continuous risk assessment
  • Clear accountability structures and human oversight implementation
  • Robust policy enforcement supported by ongoing testing and monitoring
  • Efficient regulation tracking and comprehensive compliance documentation

FairNow empowers organizations to ensure transparency, reliability, and unbiased AI usage, all while simplifying their compliance journey.

Experience how our industry-informed platform can transform your AI governance. Book a free demo here.

AI compliance doesn't have to be so complicated.

Use FairNow's AI governance platform to:

  • Effortlessly ensure your AI complies with both current and upcoming regulations
  • Verify that your AI is fair and reliable using our proprietary testing suite
  • Stay ahead of compliance requirements before fees and fines become commonplace

Explore the leading AI governance platform