Mobley v. Workday: AI Bias Lawsuit Highlights and Implications 

  • May 16, 2025: Updated to include key developments in the Mobley v. Workday lawsuit, including the court’s decision to certify the case as a collective action and the implications for both employers and HR technology vendors. U.S. District Judge Rita Lin approved a collective action under the Age Discrimination in Employment Act (ADEA), allowing the case to proceed on behalf of applicants aged 40 and over who were allegedly denied employment recommendations through Workday’s platform since September 2020.
  • July 12, 2024: A federal judge allowed a lawsuit against Workday to proceed, stating,
    “Workday’s software is not simply implementing in a rote way the criteria that employers set forth, but is instead participating in the decision-making process by recommending some candidates to move forward and rejecting others. Given Workday’s allegedly crucial role in deciding which applicants can get their ‘foot in the door’ for an interview, Workday’s tools are engaged in conduct that is at the heart of equal access to employment opportunities.”
  • April 9, 2024: The U.S. Equal Employment Opportunity Commission (EEOC) told a court that Workday should face claims regarding its allegedly biased algorithm-based applicant screening system. The case will be worth watching: it underscores ongoing legal and societal challenges in ensuring AI hiring tools do not perpetuate bias, with broader implications for regulatory oversight and the technology’s ethical use, and it highlights the need for AI governance controls and processes.
  • February 20, 2024: The amended lawsuit redefines Workday’s role, aiming to clarify its liabilities under anti-discrimination law.
  • January 2024: The initial suit was dismissed due to insufficient evidence of Workday being classified as an “employment agency.”
  • February 2023: Derek Mobley filed a lawsuit against Workday, alleging its automated resume screening tool discriminates based on race, age, and disability status.


What Does the Mobley v. Workday Lawsuit Allege Regarding Automated Resume Screening Tools?

The Mobley v. Workday lawsuit alleges that the company’s automated resume screening tool discriminates based on race, age, and disability status, highlighting the broader issue of bias within AI-based hiring processes.

Mobley v. Workday Lawsuit Milestones

Mobley filed his first version of the suit in February 2023 in the United States District Court for the Northern District of California. The plaintiff, who is African American, over 40, and disabled, claimed to have applied for over 80 jobs believed to be using Workday’s screening tool, and his application was rejected every time.

The suit cited multiple examples of bias in AI-based screening tools (such as the Amazon tool that learned to favor male candidates over female ones) and alleged that Workday’s resume screening tools also display bias, based on Mobley’s experience.

Mobley seeks a declaration that Workday’s tools violate anti-discrimination laws, an injunction preventing Workday from using discriminatory hiring algorithms, and monetary damages on behalf of himself and others with similar protected traits.

In January 2024, a judge dismissed the case because the original lawsuit did not offer enough evidence to classify Workday as an “employment agency” subject to liability under anti-discrimination law. The dismissal was not a finding that Workday’s software did not discriminate, but a legal determination about how anti-discrimination law should apply to the case. The court also noted gaps in the details Mobley offered about his protected traits and his qualifications for the jobs he sought.

On February 20, 2024, Derek Mobley filed an amended lawsuit against Workday, claiming the company’s automated resume screening tool, which can evaluate resumes and reject applicants without human oversight, discriminates on the basis of race, age, and disability status.

Update: April 2024

On April 9, 2024, the EEOC intervened in the case by filing an amicus brief supporting the plaintiff, asserting that Workday should face claims of bias in its algorithm-based applicant screening system.

The EEOC argued that Workday’s software might enable discriminatory practices by allowing employers to exclude applicants from protected categories, violating Title VII of the Civil Rights Act of 1964.

Update: July 2024

A federal judge has allowed a discrimination lawsuit against Workday’s AI hiring software to proceed.

Workday’s attempt to dismiss the case was denied, with the judge ruling that the company’s software acts as an employer’s agent in the hiring process.

“Workday’s software is not simply implementing in a rote way the criteria that employers set forth, but is instead participating in the decision-making process by recommending some candidates to move forward and rejecting others,” the judge said. “Given Workday’s allegedly crucial role in deciding which applicants can get their ‘foot in the door’ for an interview, Workday’s tools are engaged in conduct that is at the heart of equal access to employment opportunities.”

However, the judge did agree with Workday that the plaintiff did not convincingly argue that Workday functions as an employment agency, highlighting that the software does not recruit, solicit, or procure employees for companies. The court also sided with Workday, noting that the plaintiff did not provide specific evidence of intentional discrimination, despite showing that the tools had a disparate impact.

Update: May 2025

The legal strategy shifted from proving Workday is an “employment agency” to treating it as an “agent” of the employer. This distinction allowed the case to move forward.

Despite an executive order attempting to eliminate the use of “disparate impact” as a legal theory, the EEOC continues to support plaintiffs arguing AI tools can be discriminatory even without intent.

Lastly, the judge’s framing of Workday as a decision-making actor reinforces that vendors, not just employers, may share responsibility under employment law.

Update: June 2025

Following the court’s conditional certification of a collective under the ADEA, a case management conference was held to plan next steps, including the notice process for potentially impacted job applicants. The court reaffirmed that exact uniformity among applicants is not required at this stage—only that they were subject to the same AI-driven screening process.

This procedural step marks the transition into discovery, where both parties will gather evidence. The scope of the collective could be vast, given Workday’s software reportedly screened over a billion applications.

Workday may still move to decertify the collective later in the case, but for now, the court recognizes the AI tool as having a potentially central role in discriminatory hiring outcomes—reinforcing the idea that technology vendors may face accountability alongside employers.

Update: July 2025

On July 29, Judge Rita Lin ordered that the scope of the collective will now include individuals who were processed using HiredScore AI features. Workday must provide a list of customers who enabled HiredScore AI features by August 20.

Workday argued against this, noting that HiredScore was acquired after Mobley’s original claim was filed and that HiredScore AI is substantially different from Workday’s own AI, but Judge Lin rejected these arguments.

Implications for Employers

Employers must realize that outsourcing parts of the hiring process to vendor AI does not absolve them of liability. While employers should expect evidence of responsible AI practices from vendors, they still bear part of the risk because they make the final decisions about how the AI is used and how much oversight is applied. Regulations like those in New York City, Colorado, Illinois, and California require employers to take measures to ensure that AI used in the hiring process does not lead to discrimination.

Employers should conduct regular bias audits of their hiring technologies to identify, explain and mitigate issues. They should also ensure that humans are kept in the loop over hiring decisions so that appropriate oversight is applied.
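As a concrete illustration, a basic bias audit often starts with selection-rate impact ratios, using the EEOC's "four-fifths" rule of thumb as a screening heuristic. The sketch below is a minimal, hypothetical example; the group labels, numbers, and threshold are illustrative assumptions, not figures from the case record.

```python
from collections import Counter

def impact_ratios(outcomes, threshold=0.8):
    """Compute selection rates per group and flag any group whose rate
    falls below `threshold` times the highest group's rate
    (the EEOC's "four-fifths" rule of thumb).

    `outcomes` is a list of (group, selected) pairs, e.g. ("40_plus", False).
    """
    applied = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, ok in outcomes if ok)
    rates = {g: selected[g] / applied[g] for g in applied}
    best = max(rates.values())
    return {g: {"rate": round(r, 3),
                "ratio": round(r / best, 3),
                "flagged": r / best < threshold}
            for g, r in rates.items()}

# Hypothetical audit data: (age group, advanced past screening?)
data = ([("under_40", True)] * 30 + [("under_40", False)] * 70
        + [("40_plus", True)] * 12 + [("40_plus", False)] * 88)
result = impact_ratios(data)
# Here the 40+ group's selection rate (0.12) is only 40% of the
# under-40 rate (0.30), so it is flagged for further review.
```

A flagged ratio is not proof of unlawful discrimination; it is a signal that the screening step deserves deeper statistical and legal review.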

Implications for HR Tech Vendors

While most AI regulation in the United States has focused more heavily on the users of AI, this case serves as a helpful reminder that the developers of AI systems must also demonstrate that their AI is safe and trustworthy.

HR technology providers must evaluate their AI systems for potential biases, both before and after release. They should also provide employers with sufficient detail about how the AI works and how it should be used responsibly.


AI developers should closely inspect their systems’ inputs for features that could introduce inadvertent bias. Of note, Mobley attended Morehouse College, an HBCU, meaning it may have been possible to infer his race from his job application. In theory, an AI system could inadvertently pick up on this signal and assign lower scores, reproducing past discrimination in hiring decisions.
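To illustrate how a proxy can leak protected information even when the protected attribute itself is never used, the sketch below compares selection rates split by a single input feature (school attended). All names and numbers are invented for illustration, not drawn from the case.

```python
def selection_rate_by_feature(records, feature):
    """Group hypothetical screening outcomes by one input feature and
    return the selection rate for each value, so reviewers can spot
    features whose values behave like proxies for protected traits."""
    totals, passed = {}, {}
    for rec in records:
        key = rec[feature]
        totals[key] = totals.get(key, 0) + 1
        passed[key] = passed.get(key, 0) + rec["selected"]
    return {k: passed[k] / totals[k] for k in totals}

# Invented data: the model never sees race, but "school" may correlate
# with it, so a gap in rates here is worth investigating.
records = (
    [{"school": "Morehouse", "selected": True}] * 2
    + [{"school": "Morehouse", "selected": False}] * 8
    + [{"school": "State U", "selected": True}] * 5
    + [{"school": "State U", "selected": False}] * 5
)
rates = selection_rate_by_feature(records, "school")
```

The same slicing can be applied to any input feature (zip code, graduation year, employment gaps) to surface candidate proxies before they reach production.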

AI Bias Lawsuits Intensify

This is not the first lawsuit (and it won’t be the last) related to AI use and employment.

As companies continue to leverage AI-enabled tools for hiring, a combination of regulations and standards will continue to evolve to ensure responsible AI use.

New York City LL144, while perhaps limited in its scope and enforcement, requires anyone using such tools in hiring or promotion to conduct a bias audit and publish the results publicly.

Similarly, the EEOC has made it clear that employers and agencies using algorithm-based hiring tools in the hiring process are still liable for discrimination caused by the tools. As we have seen, such tools can still be biased, and their use does not provide a legal loophole for employers. The agency has taken action in similar cases before, such as the $365,000 settlement with a tutoring company that used software to explicitly reject candidates based on their age.

Accountability Is Evolving, and Precedent Is Forming

“Even if the plaintiffs prevail, appellate courts could decide the legal issues differently on whether the tech employers use is their legal agent. What we do know is that employers are responsible for the outcomes of their employment decisions no matter how those decisions were arrived at or what tech was involved.

Right now, all we really know is that a court has said it is legally possible to sue a tech company for employment discrimination based on how the AI system works. There’s a long road and a lot of questions to answer before we know if tech companies will be held legally responsible,” says Heather Bussing, an employment attorney with more than 20 years of experience.

For AI developers and AI deployers, the takeaway is clear: legal exposure doesn’t begin and end with intention—it includes impact.

This isn’t the moment to wait and see. It’s the moment to act.

What Comes Next for AI Developers and Deployers:

  • Build with Governance from the Start
    Design systems with audit trails, fairness checks, and oversight built in—not bolted on.
  • Continuously Track AI Across Your Organization
    Maintain a real-time inventory of internal and third-party AI systems—what they do, where they’re used, and what risks they carry.
  • Monitor and Test for Bias on a Rolling Basis
    Use independent audits or synthetic fairness simulations to catch bias before it impacts decisions.
  • Stay Aligned with Shifting Regulations
    Map your AI systems to emerging laws like NYC LL 144, Colorado SB-205, the EU AI Act, and ISO 42001.
  • Apply Human Oversight Where It Matters Most
    Keep people in the loop—especially in high-impact areas like hiring, credit, and healthcare.
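As one way to start on the inventory point above, an AI system register can begin as a simple structured record. This is a minimal sketch; the field names and example entries are assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI system inventory."""
    name: str
    vendor: str                 # "internal" for in-house systems
    use_case: str               # e.g. "resume screening"
    high_impact: bool           # hiring, credit, healthcare, etc.
    regulations: list = field(default_factory=list)
    last_bias_audit: str = "never"

# Illustrative inventory with one third-party screening tool.
inventory = [
    AISystemRecord(name="resume-screener", vendor="ExampleHR Inc.",
                   use_case="resume screening", high_impact=True,
                   regulations=["NYC LL 144", "Colorado SB-205"]),
]

# Surface high-impact systems that have never been audited for bias.
overdue = [s.name for s in inventory
           if s.high_impact and s.last_bias_audit == "never"]
```

Even a register this simple makes the later steps (bias testing, regulatory mapping, oversight) tractable, because you cannot audit systems you have not cataloged.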

As an AI governance provider, we’re helping organizations move from reactive compliance to continuous accountability—with prebuilt policies, bias audit tooling, and regulatory readiness frameworks.

 

Mobley vs Workday Frequently Asked Questions (FAQs)

What is the Mobley v. Workday lawsuit about?

The lawsuit alleges that Workday’s automated resume screening tool discriminated against job applicants based on race, age (40+), and disability status. Derek Mobley claims he was rejected by employers using Workday’s software due to biased algorithms.

Is Workday being held liable for hiring discrimination?

As of May 2025, the court has allowed the case to proceed as a collective action under the Age Discrimination in Employment Act (ADEA). While Workday is not classified as an “employment agency,” it may still be treated as an agent of employers, meaning it could share legal responsibility for discriminatory outcomes.

How does Workday's software allegedly discriminate against job applicants?

The lawsuit claims that Workday’s AI-driven hiring system plays a direct role in rejecting or recommending candidates. Because the software may use proxy indicators (such as schools attended) that correlate with race, age, or disability, it could result in disparate impact, even without intent.

What does this lawsuit mean for employers using AI in hiring?

Employers are still responsible for discrimination, even if they use third-party tools. This case reinforces the importance of conducting bias audits, maintaining human oversight, and choosing vendors who follow responsible AI practices.

What steps should HR tech vendors take to avoid legal risk?

Vendors must evaluate their AI for bias before and after deployment, offer transparency into how algorithms work, and help clients stay compliant with laws like NYC Local Law 144, the EU AI Act, and emerging U.S. state laws.

Has the EEOC taken a position on the case?

Yes. In April 2024, the EEOC supported the plaintiff by submitting a legal brief stating that algorithmic hiring tools—like those used by Workday—can violate anti-discrimination laws, even without explicit intent.

What does this mean for the future of AI in hiring?

This case sets a precedent. It suggests that AI vendors may be held accountable alongside employers. It’s a strong signal that governance, transparency, and continuous bias testing are essential moving forward.

What’s the next step in the Mobley v. Workday case?

Now that the case has been certified as a collective action, it could open the door for other job applicants over 40 to join. Future proceedings will likely focus on discovery, expert testimony on algorithmic bias, and further legal rulings on liability.

How can I join the Workday lawsuit if I believe I was affected?

If you are over 40 and believe you were denied a job opportunity due to bias in Workday’s resume screening system (used by employers since September 2020), you may be eligible to join the collective action. You should consult with an employment attorney or follow updates from the court handling Mobley v. Workday for instructions on how to opt in.

Does Workday use AI to screen resumes?

Yes. Workday’s software uses algorithmic systems to evaluate and recommend job applicants based on employer-defined criteria. The lawsuit claims that Workday’s AI does more than follow instructions—it actively participates in the decision-making process, which may result in biased outcomes that affect equal employment opportunities.

What does the Mobley v. Workday lawsuit allege regarding automated resume screening tools?

The Mobley v. Workday lawsuit alleges that the company’s automated resume screening tool discriminates based on race, age, and disability status, highlighting the broader issue of bias within AI-based hiring processes.

Quick Summary of the Workday AI Lawsuit

  • Filed by: Derek Mobley in February 2023
  • Allegations: AI hiring bias based on age, race, disability
  • Legal Status (as of May 2025): Certified as a collective action
  • Involvement: EEOC filed amicus brief in support of claims
  • Workday’s Role: AI acts as employer’s “agent” in hiring
  • Who it Affects: Applicants over 40 or with protected characteristics since Sept 2020

About Guru Sethupathy

Guru Sethupathy has spent over 15 years immersed in AI governance, from his academic pursuits at Columbia and advisory role at McKinsey to his executive leadership at Capital One and the founding of FairNow. When he’s not thinking about responsible AI, you can find him on the tennis court, just narrowly escaping defeat at the hands of his two daughters. Learn more on LinkedIn at https://www.linkedin.com/in/guru-sethupathy/
