What is the California ADT AI Law? A Guide to California AB 2930 (Formerly California AB 331)
In this guide:
- Focuses on Private-sector AI Regulation
- Prioritizes Transparency in AI Systems
- Mandatory Impact Assessments
- Aims to Mitigate Algorithmic Discrimination in Employment
- FAQs About California’s AB 2930 (Formerly AB 331)
- Steps to Achieve Compliance
High-Level Summary
AB 2930, formerly Assembly Bill 331, is a legislative proposal introduced in California to regulate the use of automated decision tools (ADTs) to prevent algorithmic discrimination.
The core requirements of the bill are to:
- Prohibit algorithmic discrimination through the regulation of ADTs
- Mandate impact assessments for ADTs
- Ensure transparency and accountability
- Impose governance requirements on both developers and deployers of ADTs
- Prevent unjustified differential treatment based on protected characteristics
- Ensure fair use of AI in making consequential decisions
California Assembly Bill 2930 defines ADTs as systems that use AI to make or significantly influence consequential employment decisions, specifically those related to hiring, promotion, termination, pay, and task allocation.
Note: Previous versions of the bill focused on a much broader set of high-risk applications, but in August 2024 the scope was narrowed to employment.
Scope
AB 2930 applies to both developers and deployers of automated decision tools (ADTs) used in employment.
Developers are responsible for creating and modifying ADTs, while deployers are entities that use these tools to make or significantly influence consequential decisions affecting individuals.
Compliance Requirements of AB 2930
The compliance requirements of AB 2930 include:
- Impact Assessments: Both developers and deployers must perform annual impact assessments on ADTs, detailing the purpose, intended benefits, and potential adverse impacts of the tools.
- Governance Programs: Deployers must establish governance programs to manage risks associated with ADTs, including designating responsible personnel, implementing safeguards, and conducting annual reviews.
- Transparency and Notification: Deployers must notify individuals when an ADT is used to make a consequential decision, providing information about the tool’s purpose and use.
- Public Policy Disclosure: Both developers and deployers must make publicly available a clear policy summarizing the types of ADTs in use and how risks are managed.
- Mitigation of Algorithmic Discrimination: Developers and deployers must address and mitigate any identified risks of algorithmic discrimination before using or making ADTs available.
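The bill itself does not prescribe any particular format for these records. Purely as an illustration, here is a minimal Python sketch of how a deployer might capture an annual impact assessment and flag when the next one is due; all class, field, and function names are hypothetical, not terms from the bill.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class ImpactAssessment:
    """One annual impact assessment for an automated decision tool (ADT).

    Fields mirror the categories the assessment requirement covers:
    purpose, intended benefits, potential adverse impacts, and safeguards.
    """
    tool_name: str
    purpose: str
    intended_benefits: list[str]
    potential_adverse_impacts: list[str]
    safeguards: list[str]          # mitigations for identified risks
    completed_on: date

    def reassessment_due(self, today: date | None = None) -> bool:
        """Assessments are annual: flag when more than a year has passed."""
        today = today or date.today()
        return today - self.completed_on > timedelta(days=365)


# Example: a hiring-screening tool assessed over a year ago is due for review.
assessment = ImpactAssessment(
    tool_name="resume-screening-model",
    purpose="Rank applicants for interview scheduling",
    intended_benefits=["Faster screening", "Consistent criteria"],
    potential_adverse_impacts=["Possible disparate impact on protected groups"],
    safeguards=["Quarterly bias testing", "Human review of rejections"],
    completed_on=date(2024, 6, 1),
)
print(assessment.reassessment_due(today=date(2025, 7, 1)))  # True
```

The point is simply that each of the obligations above maps to a concrete, reviewable record that can be revisited on an annual cycle.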
Non-Compliance Penalties
Non-compliance with AB 2930 can result in administrative fines of up to $10,000 per violation, which can be recovered through administrative enforcement actions brought by the Civil Rights Department.
Additionally, civil penalties of $25,000 per violation can be imposed for cases involving algorithmic discrimination.
Status
As of the latest update, California AB 2930 is under consideration by the legislature.
The bill has passed the Assembly and is currently being reviewed by the Senate Judiciary Committee.
Update: On August 15, 2024, the bill’s scope was narrowed to certain employment-related applications.
How Can Companies Ensure Compliance with AB 2930?
Drawing on our ongoing work in AI governance and responsible AI, we’ve observed how leading organizations are adapting to California’s AI regulation.
To implement AB 2930’s requirements, companies should:
- Conduct Impact Assessments: Regularly perform detailed assessments of AI tools to identify potential risks and adverse impacts.
- Establish Governance Programs: Develop comprehensive governance programs that include risk management strategies, designate responsible personnel, and implement safeguards.
- Ensure Transparency: Notify individuals when AI tools are used in consequential decisions, providing clear and accessible information about the tools’ purposes and impacts.
- Develop Public Policies: Make publicly available policies that summarize AI tools in use and how associated risks are managed.
- Monitor Compliance: Stay updated with regulatory changes and ensure ongoing compliance with all requirements to avoid penalties and maintain ethical AI practices.
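The transparency step in the list above lends itself to a simple template. The sketch below is hypothetical wording and naming, not language drawn from the bill; it only shows the kind of information a deployer’s pre-use notice to an affected individual would need to carry: the tool, the decision it influences, its purpose, and a point of contact.

```python
def pre_use_notice(candidate_name: str, tool_name: str, decision: str,
                   purpose: str, contact_email: str) -> str:
    """Draft a plain-language notice telling an individual that an automated
    decision tool will be used in a consequential employment decision."""
    return (
        f"Dear {candidate_name},\n\n"
        f"We use an automated decision tool ({tool_name}) to help make the "
        f"following decision: {decision}.\n"
        f"Purpose of the tool: {purpose}.\n"
        f"Questions about how this tool is used can be sent to {contact_email}.\n"
    )


print(pre_use_notice(
    candidate_name="Jordan Lee",
    tool_name="resume-screening-model",
    decision="selection for a first-round interview",
    purpose="rank applications against the posted job requirements",
    contact_email="ai-governance@example.com",
))
```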
How FairNow’s AI Governance Platform Helps
Built on deep industry expertise, FairNow’s AI Governance Platform addresses the unique challenges of high-risk AI categories.
Our platform, designed by experts with experience in highly regulated industries, provides:
- Streamlined compliance processes and reduced reporting times
- Centralized AI inventory management with continuous risk assessment
- Clear accountability structures and human oversight implementation
- Robust policy enforcement supported by ongoing testing and monitoring
- Efficient regulation tracking and comprehensive compliance documentation
FairNow empowers organizations to ensure transparency, reliability, and unbiased AI usage, all while simplifying their compliance journey.
Experience how our industry-informed platform can transform your AI governance. Book a free demo here.