
Understanding NYC’s Local Law 144 to Regulate AI in Hiring

Apr 7, 2023 | FairNow Blog

By Stephen Jordan
Responsible AI Development

New York City recently approved the final version of Local Law 144, which regulates the use of automated employment decision tools (AEDTs) in the City. Most notably, the law prohibits employers and employment agencies from using an AEDT unless it has been audited for bias by an independent auditor within the last 12 months. The law is intended to prevent disparate impact in hiring, where certain groups are discriminated against on the basis of characteristics like gender and race.

The bill was first introduced in 2021 and went through several revisions based on public feedback. The final version was published on March 6, 2023, and enforcement will begin on July 5, 2023.

Summary of the law

The following is a summary of the law. A more comprehensive write-up is coming soon!

The law covers employers and employment agencies using AEDTs on candidates residing in NYC. The law defines an AEDT as a tool that produces a simplified output (such as a machine learning model's score, tag, classification, or rank) that is used as the only or the most significant criterion for a candidate to move forward in a hiring process.

The requirements consist of three parts:

  • First, employers and employment agencies cannot use an AEDT unless a bias audit has been conducted by an independent auditor within the past 12 months. Full expectations for this audit are laid out in the bill, but the critical focus is the difference in selection rates (or scoring rates, in the case of models with numerical score outputs) between groups defined by the protected characteristics of gender and race/ethnicity; a short sketch of this calculation follows the list.
  • Second, employers are expected to publish the results of their most recent bias audit.
  • Third, employers must give candidates notification of AEDT usage and the criteria the AEDT will assess, and give candidates an opportunity to request an alternative assessment method, if feasible. This notification must be given at least 10 business days before AEDT usage.
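
For intuition, here is a minimal sketch of the selection-rate and impact-ratio calculation at the heart of the bias audit. The data and category labels below are hypothetical, and the law also defines a scoring-rate variant for tools that output numerical scores; an actual audit must be performed by an independent auditor on real historical data.

```python
from collections import defaultdict

# Hypothetical audit data: (protected category, whether the AEDT selected
# the candidate to move forward).
candidates = [
    ("female", True), ("female", False), ("female", True),
    ("male", True), ("male", True), ("male", True), ("male", False),
]

# Selection rate per category: selected / total within that category.
totals, selected = defaultdict(int), defaultdict(int)
for category, was_selected in candidates:
    totals[category] += 1
    selected[category] += was_selected

rates = {cat: selected[cat] / totals[cat] for cat in totals}

# Impact ratio: each category's selection rate divided by the rate of the
# most-selected category. Ratios well below 1.0 signal possible disparate
# impact (the EEOC's four-fifths rule treats 0.8 as a rough threshold).
best_rate = max(rates.values())
for cat, rate in rates.items():
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {rate / best_rate:.2f}")
```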

Penalties for non-compliance are a $500 fine for the first violation and $1,500 fines for subsequent violations.

Our analysis

This law is the first of its kind in the US to require bias assessments for AEDTs. Maryland and Illinois have passed laws requiring disclosure of use (in Illinois's case) and candidate consent (in Maryland's case) before AI-assisted video interviewing can be used, but this bill is stronger: it requires an independent audit assessing for disparate impact, and it covers a much broader set of AI-assisted hiring tools.

New Jersey is considering similar legislation with its A4909 bill, which has not yet been passed. While both bills mandate bias audits before AEDTs can be used, there are some differences between the two. Most notably, the NJ bill applies only to vendors selling or marketing AEDTs for use in the state; under the current proposal, no such requirement exists for companies using internally developed AEDTs. The bills also differ in how they define AEDTs, how the bias audit should be conducted, and the type of notice required to candidates (NJ A4909 requires notification after usage, while NYC LL144 requires advance notice).

We disagree with the law on several points:

  • First, the law is too narrow in how it defines AEDTs. It excludes technologies like generative AI, which doesn't produce a simplified output but still has the potential to be biased.
  • Second, the bar of "only or most significant criterion" is also too narrow. AI tools can shape decision-making even when they are just one part of a broader decision-making process.
  • Third, we disagree with the expectation that employers publish their audit results. Raw impact ratios can be taken out of context and will chill adoption of even robust and fair AEDTs. We want to encourage innovation and adoption of responsible AI; publishing raw results will likely perpetuate human-driven processes, which can be just as biased, if not more so.
  • Fourth, we disagree with giving candidates the option to opt out. If the AEDT has been shown to be unbiased, why should candidates be able to opt out? This will only add friction and slow the adoption of responsible AI.

Conclusion

This law is a first step toward preventing bias in AEDTs, such as the 2018 case of Amazon's resume screening tool, which displayed a clear bias against women. But we are in the top of the first inning with regard to AI regulation: in the future we expect more laws to follow in more jurisdictions, the definition of an AEDT to broaden, and market standards to develop that set higher expectations for fair and explainable AI and ML. The need to demonstrate fairness in AI is only going to grow. FairNow can help you ensure and demonstrate that your AI tools are fair and compliant. If interested, please reach out!
