What are California’s Rules for Automated Decision Systems?
A Guide to Fair Employment & Housing Act Compliance
Effective October 1, 2025
High-Level Overview
1) Although bias testing is not required, the new rules make it employers’ best defense against claims of discrimination.
The California Civil Rights Council adopted these rules to clarify how employers can use automated-decision systems (ADSs) in ways that comply with the California Fair Employment and Housing Act. While the rules do not require employers to conduct bias testing on software they use in hiring or employment, they explicitly state that the presence or absence of bias testing can serve as strong evidence in a discrimination suit.
2) Recordkeeping requirements extend to ADSs.
Employers using ADSs to assist with hiring or employment decisions must now keep records about the system’s use for four years, including data inputs and outputs as well as data about any refinement or training of the system specific to their organization.
3) California’s list of protected attributes is long, and some attributes may require accommodations when certain systems are used.
Employers should be aware that California protects a long list of attributes beyond those found in federal anti-discrimination legislation. Because some systems can discriminate against individuals with certain disabilities or accents, organizations should assess that risk and prepare alternative accommodations where necessary.
California ADS Rules Scope
How do the new rules define an “automated-decision system”?
For the purposes of these rules, an automated-decision system is “a computational process that makes a decision or facilitates human decision making regarding an employment benefit…an Automated-Decision System may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques.” Examples of covered tasks include assessing applicants, screening resumes, or directing recruitment.
In other words, the new rules govern a wide variety of tools used to help employers evaluate and screen current employees and job applicants. Most importantly, an ADS may involve not just an evaluation built on recent LLMs, but also more traditional algorithms not typically thought of as AI.
Who do the California Civil Rights Department rules apply to?
The new ADS rules cover any employer with five or more employees doing business in California, as well as any of their agents involved in decisions about employment benefits, such as employment agencies.
What are the compliance requirements for employers?
Employers are now required to retain, for four years, records of the inputs and outputs of any ADS used in an employment context, along with any data involved in training or fine-tuning the system for their specific deployment.
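What adequate recordkeeping looks like will vary by system and vendor. As a rough illustration, here is a minimal Python sketch of an append-only decision log sized to a four-year retention window; the names (ADSRecord, log_decision) and the fields are hypothetical conveniences, not terms from the rules.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timedelta, timezone

# Four-year retention window (approximate; the rules state four years)
RETENTION = timedelta(days=4 * 365)

@dataclass
class ADSRecord:
    """One logged ADS decision: what went in, what came out, and when."""
    system_name: str   # which ADS produced the decision
    candidate_id: str  # internal identifier rather than raw PII
    inputs: dict       # the features the system actually received
    output: str        # e.g. "advance", "reject", or a score
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(path: str, record: ADSRecord) -> None:
    """Append one decision as a JSON line (retention enforcement not shown)."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision("ads_decisions.jsonl", ADSRecord(
    system_name="resume-screener-v2",
    candidate_id="cand-0042",
    inputs={"years_experience": 6, "skills_match": 0.81},
    output="advance"))
```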
Importantly, the rules do not require deployers to have conducted a bias test or audit of an ADS. However, they do explicitly state that anti-bias testing, or the lack of it, could be considered as evidence in a defense against a claim of unlawful discrimination. In general, pre-deployment risk assessment and due diligence are encouraged.
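Anti-bias testing can take many forms. One common starting point for selection tools is comparing selection rates across demographic groups against the “four-fifths” rule of thumb from US adverse-impact analysis. The sketch below is a minimal illustration of that calculation, not a legally sufficient audit; the data and function names are invented for the example.

```python
from collections import Counter

def selection_rates(records):
    """Per-group selection rates from (group, was_selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Each group's selection rate relative to the highest-rate group.

    Under the four-fifths rule of thumb, a ratio below 0.8 is often
    treated as preliminary evidence of adverse impact.
    """
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes: (group, passed_screen)
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
rates = selection_rates(outcomes)
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: rate={rates[group]:.2f} ratio={ratio:.2f} [{flag}]")
```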
The rules also make clear that under certain circumstances, ADSs could have the effect of discriminating against protected classes of individuals. For example, an automated phone screener that evaluates a job applicant’s communication skills could have the effect of discriminating against individuals with foreign accents or speech impairments. Employers should be aware that certain AI or ADS tools may therefore require alternative accommodations.
Non-Compliance Penalties
What are the non-compliance penalties for the California ADS rules?
There are no penalties from the State of California or other government bodies for violating these new rules. However, if an organization becomes involved in an employment discrimination suit and cannot produce relevant records, that failure may result in additional fines or penalties.
Similarly, organizations that deploy ADSs in their employment processes without anti-bias testing or other due diligence may find themselves with limited defenses against a claim of discrimination, which can mean paying significant damages to applicants or employees.
Status
When do these new rules from the California Civil Rights Department go into effect?
After a lengthy proposal process, the rules were approved by the California Civil Rights Council in May and will go into effect on October 1, 2025.
Steps To Compliance
How can organizations comply with the Civil Rights Department’s requirements for automated-decision systems?
For companies to assess their current use of ADSs in hiring and employment processes, certain governance practices are necessary.
Drawing from our extensive work in AI governance and compliance, we’ve identified the following best practices to help ensure compliance:
- Adopt an AI Governance or Risk Management program. Although specific requirements differ across jurisdictions, the basic principles in frameworks such as the NIST AI RMF or ISO 42001 will be broadly useful around the world.
- Build an inventory of your AI applications. A risk assessment can help determine which tools deployed in your own HR processes, or those of your agents, qualify under these rules and may require anti-bias testing, or may be at risk of discriminating and require you to set up alternative accommodations (a minimal sketch of such an inventory entry follows this list).
- Ensure you have record retention, documentation, and controls around the inputs and outputs of any automated-decision systems, especially those used in employment. Make particular note of what kinds of personal information may be included and whether it could be output by the model.
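To make the inventory and risk-assessment steps concrete, here is a minimal sketch of what one inventory entry might look like. The fields, the scope test, and the recommended actions are illustrative assumptions drawn from the themes of the rules, not an official checklist.

```python
from dataclasses import dataclass

@dataclass
class AIInventoryEntry:
    """One row in an internal AI/ADS inventory (illustrative fields)."""
    tool_name: str
    vendor: str
    used_in_employment: bool         # hiring, screening, promotion, etc.
    facilitates_decisions: bool      # makes or assists employment decisions
    evaluates_speech_or_video: bool  # may call for accommodations

    def in_ads_scope(self) -> bool:
        """Rough proxy for whether the tool meets the rules' ADS definition."""
        return self.used_in_employment and self.facilitates_decisions

    def recommended_actions(self) -> list[str]:
        actions = []
        if self.in_ads_scope():
            actions += ["retain input/output records for four years",
                        "schedule anti-bias testing"]
        if self.evaluates_speech_or_video:
            actions.append("prepare an alternative accommodation process")
        return actions

screener = AIInventoryEntry(
    tool_name="phone-screen-ai", vendor="AcmeVendor",
    used_in_employment=True, facilitates_decisions=True,
    evaluates_speech_or_video=True)
print(screener.recommended_actions())
```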
Staying informed and engaged will be key to ensuring compliance with the ADS rules for the California Fair Employment and Housing Act.
AI Compliance Tools
How FairNow’s AI Governance Platform Helps
Developed by specialists in AI risk management, testing, and compliance, FairNow’s AI Governance Platform is tailored to the unique challenges of governing AI systems. FairNow provides:
- Streamlined compliance processes, reducing reporting times
- Centralized AI inventory management with intelligent risk assessment
- Clear accountability frameworks and human oversight integration
- Ongoing testing and monitoring tools
- Efficient regulation tracking and comprehensive compliance documentation
FairNow enables organizations to ensure transparency, reliability, and unbiased AI usage, all while simplifying their compliance journey.
Experience how our industry-informed platform can simplify AI governance.