
Workday Accused Of Bias In AI Resume Screening: Updated May 16, 2025

May 27, 2025 | FairNow Blog

By Guru Sethupathy

Key Takeaways From the Mobley v. Workday Lawsuit (Explained in Plain Language)

 

🔄 Updated May 16, 2025 to include key developments in the Mobley v. Workday lawsuit, including the court’s decision to certify the case as a collective action and the implications for both employers and HR technology vendors.

  • May 16, 2025: U.S. District Judge Rita Lin approved a collective action under the Age Discrimination in Employment Act (ADEA), allowing the case to proceed on behalf of applicants aged 40 and over who were allegedly denied employment recommendations through Workday’s platform since September 2020.

  • July 12th, 2024: A federal judge allowed a lawsuit against Workday to proceed, stating:

“Workday’s software is not simply implementing in a rote way the criteria that employers set forth, but is instead participating in the decision-making process by recommending some candidates to move forward and rejecting others. Given Workday’s allegedly crucial role in deciding which applicants can get their ‘foot in the door’ for an interview, Workday’s tools are engaged in conduct that is at the heart of equal access to employment opportunities.”

  • April 9th, 2024: The U.S. Equal Employment Opportunity Commission (EEOC) told the court that Workday should face claims that its algorithm-based applicant screening system is biased.
  • The case will be fascinating to watch as it underscores ongoing legal and societal challenges in ensuring AI hiring tools do not perpetuate bias, with broader implications for regulatory oversight and the technology’s ethical use.
  • The case highlights the need for AI governance controls and processes.
  • February 20th, 2024: The amended lawsuit redefines Workday’s role, aiming to clarify its liabilities under anti-discrimination law.
  • January 2024: The initial suit was dismissed because it offered insufficient evidence to classify Workday as an “employment agency.”
  • February 2023: Derek Mobley filed a lawsuit against Workday, alleging its automated resume screening tool discriminates based on race, age, and disability status.

What Does the Mobley v. Workday Lawsuit Allege Regarding Automated Resume Screening Tools?

The Mobley v. Workday lawsuit alleges that the company’s automated resume screening tool discriminates based on race, age, and disability status, highlighting the broader issue of bias within AI-based hiring processes.

Timeline of Mobley v. Workday Lawsuit

On February 20, 2024, a man named Derek Mobley filed an amended lawsuit against Workday, claiming the company’s automated resume screening tool, which can evaluate resumes without human oversight and decide whether to reject the applicant, discriminates along the lines of race, age and disability status.

Mobley filed the first version of the suit in February 2023 in the United States District Court for the Northern District of California. The plaintiff, who is African American, over 40, and disabled, claimed to have applied for over 80 jobs at companies believed to be using Workday’s screening tool, and his application was rejected every time.

The suit cited multiple examples of bias in AI-based screening tools (like the Amazon tool that learned to favor male candidates over female candidates) and alleged that Workday’s resume screening tools display similar bias, based on Mobley’s experience.

Mobley seeks a declaration that Workday’s tools violate anti-discrimination laws, an injunction preventing Workday from using discriminatory hiring algorithms, and monetary damages on behalf of himself and others with similar protected traits.

In January 2024, a judge dismissed the case because the original lawsuit did not offer enough evidence to classify Workday as an “employment agency” subject to liability under anti-discrimination law. The dismissal was not a finding that Workday’s software did not discriminate, but a legal determination about how anti-discrimination law applies to the case. The court also noted gaps in the details Mobley offered about his protected traits and his qualifications for the jobs he sought.

Update: April 9, 2024

On April 9th, the U.S. Equal Employment Opportunity Commission (EEOC) intervened in the case by filing an amicus brief supporting the plaintiff.

The brief asserted that Workday should face claims of bias in its algorithm-based applicant screening system.

The EEOC argued that Workday’s software might enable discriminatory practices by allowing employers to exclude applicants from protected categories, violating Title VII of the Civil Rights Act of 1964.

Update: July 12, 2024

A federal judge allowed the discrimination lawsuit over Workday’s AI hiring software to proceed.

Workday’s motion to dismiss was denied, with the judge ruling that the company’s software acts as an employer’s agent in the hiring process.

“Workday’s software is not simply implementing in a rote way the criteria that employers set forth, but is instead participating in the decision-making process by recommending some candidates to move forward and rejecting others,” the judge said. “Given Workday’s allegedly crucial role in deciding which applicants can get their ‘foot in the door’ for an interview, Workday’s tools are engaged in conduct that is at the heart of equal access to employment opportunities.”

However, the judge did agree with Workday that the plaintiff had not convincingly argued that Workday functions as an employment agency, noting that the software does not recruit, solicit, or procure employees for companies. The court also sided with Workday on the intentional-discrimination claim, finding that the plaintiff offered no specific evidence of intent, even though he plausibly alleged that the tools had a disparate impact.

Update: May 16, 2025

The legal strategy shifted from proving Workday is an “employment agency” to treating it as an “agent” of the employer. This distinction allowed the case to move forward.

Despite an executive order attempting to eliminate the use of “disparate impact” as a legal theory, the EEOC continues to support plaintiffs arguing AI tools can be discriminatory even without intent.

Lastly, the judge’s framing of Workday as a decision-making actor reinforces that vendors, not just employers, may share responsibility under employment law.

Implications for Employers

Employers must realize that outsourcing parts of the hiring process to vendor AI does not absolve them of liability. While employers should expect evidence of responsible AI practices from vendors, they still carry part of the risk because they make the final decisions about how the AI is used and how much oversight is applied. Regulations like those in NYC, Colorado, Illinois, and California require employers to take measures to ensure that AI used in the hiring process does not lead to discrimination.

Employers should conduct regular bias audits of their hiring technologies to identify, explain and mitigate issues. They should also ensure that humans are kept in the loop over hiring decisions so that appropriate oversight is applied.
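To make the audit step concrete, here is a minimal sketch of one metric such audits commonly report: the adverse impact ratio (the “four-fifths rule”), which is also the core calculation behind NYC LL144-style bias audits. The applicant records and group labels below are hypothetical placeholders, not data from the Workday case.

```python
# Minimal sketch of an adverse impact ratio check (the "four-fifths rule").
# Hypothetical data; group labels and counts are placeholders.
from collections import defaultdict

def selection_rates(applicants):
    """Compute per-group selection rates from (group, selected) records."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, was_selected in applicants:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def adverse_impact_ratios(applicants):
    """Divide each group's selection rate by the highest group's rate.
    Ratios below 0.8 are a conventional red flag for disparate impact."""
    rates = selection_rates(applicants)
    benchmark = max(rates.values())
    return {g: rate / benchmark for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical audit data: (group label, whether the tool advanced them)
    records = ([("A", True)] * 60 + [("A", False)] * 40
               + [("B", True)] * 35 + [("B", False)] * 65)
    for group, ratio in adverse_impact_ratios(records).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```

An impact ratio below 0.8 does not prove discrimination on its own, but it is the conventional trigger for deeper review and mitigation.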

Implications for HR Tech Vendors

While most AI regulation in the United States has focused more heavily on the users of AI, this case serves as a helpful reminder that the developers of AI systems must also do their part to demonstrate that their AI is safe and trustworthy.

HR technology providers must evaluate their AI systems for potential biases, both before and after release. They should also give employers sufficient detail about how the AI works and how it should be used responsibly.


AI developers should closely inspect their systems’ inputs for features that could lead to inadvertent bias. Of note, Mobley attended Morehouse College, an HBCU, meaning it may have been possible to infer his race from his job application. Theoretically, an AI system could inadvertently pick up on that signal and assign lower scores, reproducing past discrimination in hiring decisions.
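One way to surface this kind of proxy effect is to check whether a single resume feature, on its own, systematically shifts the model’s scores. The sketch below does this for a hypothetical “school” field; the function name, records, and scores are all invented for illustration, not drawn from any real screening system.

```python
# Minimal sketch of a proxy-feature check: does one resume feature (here a
# hypothetical "school" field) shift mean model scores between groups?
from statistics import mean

def score_gap_by_feature(records, feature, score_key="score"):
    """Group records by a feature's value and report each group's
    mean-score gap relative to the highest-scoring group."""
    buckets = {}
    for rec in records:
        buckets.setdefault(rec[feature], []).append(rec[score_key])
    means = {value: mean(scores) for value, scores in buckets.items()}
    baseline = max(means.values())
    return {value: baseline - m for value, m in means.items()}

if __name__ == "__main__":
    # Hypothetical screening scores keyed by school
    records = [
        {"school": "School A", "score": 0.72},
        {"school": "School A", "score": 0.68},
        {"school": "School B", "score": 0.51},
        {"school": "School B", "score": 0.55},
    ]
    for school, gap in score_gap_by_feature(records, "school").items():
        print(f"{school}: mean-score gap vs. top group = {gap:.2f}")
```

A persistent gap tied to a feature that correlates with a protected trait (such as attendance at an HBCU) would warrant removing or neutralizing that feature before deployment.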

AI Bias Lawsuits Intensify

This is not the first lawsuit (and it won’t be the last) related to AI use and employment.

As companies continue to leverage AI-enabled tools for hiring, a combination of regulations and standards will continue to evolve to ensure responsible AI use.

New York City LL144, while perhaps limited in its scope and enforcement, requires anyone using such tools in hiring or promotion to conduct a bias audit and publish the results publicly.

Similarly, the EEOC has made it clear that employers and agencies using algorithm-based hiring tools in the hiring process are still liable for discrimination caused by those tools. As we have seen, such tools can still be biased, and their use does not provide a legal loophole for employers. The agency has taken action in similar cases before, such as the $365,000 settlement with a tutoring company that used software to automatically reject candidates based on their age.

Accountability Is Evolving, and Precedent Is Forming

“Even if the plaintiffs prevail, appellate courts could decide the legal issues differently on whether the tech employers use is their legal agent. What we do know is that employers are responsible for the outcomes of their employment decisions, no matter how those decisions were arrived at or what tech was involved.

“Right now, all we really know is that a court has said it is legally possible to sue a tech company for employment discrimination based on how the AI system works. There’s a long road and a lot of questions to answer before we know if tech companies will be held legally responsible,” says Heather Bussing, an employment attorney with more than 20 years of experience.

For AI developers and AI deployers, the takeaway is clear: legal exposure doesn’t begin and end with intention—it includes impact.

This isn’t the moment to wait and see. It’s the moment to act.

What Comes Next for AI Developers and Deployers:

  • Build with Governance from the Start
    Design systems with audit trails, fairness checks, and oversight built in—not bolted on.
  • Continuously Track AI Across Your Organization
    Maintain a real-time inventory of internal and third-party AI systems—what they do, where they’re used, and what risks they carry.
  • Monitor and Test for Bias on a Rolling Basis
    Use independent audits or synthetic fairness simulations to catch bias before it impacts decisions (see the monitoring sketch after this list).
  • Stay Aligned with Shifting Regulations
    Map your AI systems to emerging laws like NYC LL 144, Colorado SB-205, the EU AI Act, and ISO 42001.
  • Apply Human Oversight Where It Matters Most
    Keep people in the loop—especially in high-impact areas like hiring, credit, and healthcare.
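As a rough illustration of the rolling-monitoring bullet above, the sketch below recomputes group selection rates over a sliding window of recent screening decisions and raises an alert when the worst impact ratio dips below the conventional 0.8 threshold. The class name, window size, and data shape are assumptions for illustration, not a prescribed implementation.

```python
# Minimal sketch of rolling bias monitoring: track recent (group, selected)
# decisions in a sliding window and alert when the worst-case impact ratio
# drops below a threshold. All names and data shapes are hypothetical.
from collections import deque, defaultdict

class RollingBiasMonitor:
    def __init__(self, window_size=200, threshold=0.8):
        self.window = deque(maxlen=window_size)  # recent decisions
        self.threshold = threshold

    def record(self, group, selected):
        """Log one screening decision; return an alert string if the
        window's worst impact ratio falls below the threshold."""
        self.window.append((group, selected))
        total, picked = defaultdict(int), defaultdict(int)
        for g, s in self.window:
            total[g] += 1
            picked[g] += int(s)
        rates = {g: picked[g] / total[g] for g in total}
        if len(rates) < 2 or max(rates.values()) == 0:
            return None  # need at least two comparable groups
        worst = min(rates.values()) / max(rates.values())
        if worst < self.threshold:
            return f"ALERT: worst-case impact ratio {worst:.2f}"
        return None
```

In practice, the window size and alert threshold would be tuned to application volume and to the audit obligations a given deployment faces.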

As an AI governance provider, we’re helping organizations move from reactive compliance to continuous accountability—with prebuilt policies, bias audit tooling, and regulatory readiness frameworks.

 

About Guru Sethupathy

Guru Sethupathy has spent over 15 years immersed in AI governance, from his academic pursuits at Columbia and advisory role at McKinsey to his executive leadership at Capital One and the founding of FairNow. When he’s not thinking about responsible AI, you can find him on the tennis court, just narrowly escaping defeat at the hands of his two daughters. Learn more on LinkedIn at https://www.linkedin.com/in/guru-sethupathy/

Explore the leading AI governance platform