A job applicant is suing SiriusXM Radio in federal court, claiming that the company’s use of AI hiring technology discriminated against him on the basis of race.
The suit, filed on August 4, alleges that SiriusXM’s use of the iCIMS Applicant Tracking System perpetuated past bias in hiring decisions and disqualified him from job opportunities for which he was qualified.
The plaintiff is Arshon Harper, an African American man from Detroit with a decade of experience in IT. Harper applied for approximately 150 positions at SiriusXM for which he was qualified. He was rejected outright from all but one; for that position he received an interview but was turned down afterwards.
According to the suit, iCIMS’s AI evaluated candidates on data points that can serve as proxies for race, such as education and address. Harper claims these data points are not job-related and that their use resulted in intentional discrimination (i.e., disparate treatment) against African Americans.
Harper contends that his experience applying for jobs at SiriusXM, receiving no offers and only one interview request, constitutes racial discrimination. He is seeking damages for lost wages and non-financial harms, and he is also seeking class action certification, which would allow other affected job seekers to join the case.
Implications for Employers
This lawsuit serves as a reminder that employers can be held accountable for their use of AI – even if the technology was purchased from a vendor. The EEOC has made this point clear as well.
Regulations like those in NYC, Colorado, Illinois, and California require employers to take measures to ensure that AI used in the hiring process does not lead to discrimination.
Employers should conduct regular bias audits of their hiring technologies to identify, explain, and mitigate issues. They should also keep humans in the loop on hiring decisions so that appropriate oversight is applied. And they should evaluate whether the variables used in AI-powered hiring tools are relevant and appropriate to the job.
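As a concrete illustration of what a bias audit computes, below is a minimal sketch of the impact ratio analysis behind the EEOC’s four-fifths rule of thumb (a similar calculation underpins NYC Local Law 144 bias audits). The applicant records, group labels, and flagging threshold here are hypothetical placeholders, not data from any real system.

```python
from collections import defaultdict

# Hypothetical applicant outcomes: (demographic group, was_selected).
# In a real audit these would come from the employer's ATS records.
applicants = [
    ("Group A", True), ("Group A", False), ("Group A", True),
    ("Group B", False), ("Group B", False), ("Group B", True),
]

FOUR_FIFTHS = 0.8  # EEOC rule-of-thumb threshold for adverse impact

def impact_ratios(records):
    """Selection rate per group divided by the highest group's rate."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, chosen in records:
        totals[group] += 1
        selected[group] += chosen  # True counts as 1, False as 0
    rates = {g: selected[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

for group, ratio in impact_ratios(applicants).items():
    flag = "REVIEW" if ratio < FOUR_FIFTHS else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```

An impact ratio below 0.8 does not by itself establish discrimination, but it is exactly the kind of signal a regular audit should surface, explain, and mitigate.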
Implications for HR Tech Vendors
When an employer uses AI from a vendor, responsibility and accountability are divided between the two parties. Where that line is drawn is far from settled and can vary under different laws. While iCIMS is not being sued, the claims in the filing refer to iCIMS’s technology and the variables it uses.
AI developers should evaluate their models for bias regularly. They should verify that the variables their models use are valid and unlikely to cause discrimination. And they should provide effective guidance to users of the model so that the AI can be used in a safe and responsible manner.
Similar AI Bias Lawsuits
Mobley v. Workday
Mobley v. Workday is an ongoing collective action lawsuit alleging that Workday’s AI discriminates on the basis of disability.
Similar to the SiriusXM case, the plaintiff cites his experience of being rejected from a large number of jobs as evidence of discrimination. However, Mobley v. Workday differs in that Mobley sued the developer rather than the employer(s).
“Employers are always liable for discrimination, whether or not technology is involved. In Harper v. SiriusXM, the plaintiff sued only the employer. While we don’t know the exact reasoning, it’s likely because the law clearly holds companies responsible for hiring outcomes—even when those outcomes are shaped by software.” — Heather Bussing, Employment Lawyer
Intuit / HireVue Discrimination Complaint
An Indigenous and deaf woman filed a complaint against Intuit, her employer, and HireVue, which provides AI hiring software.
The complaint alleges bias on the basis of the woman’s race and disability because she was denied a promotion after using HireVue’s video interview product. HireVue’s technology uses AI to transcribe conversations, but these capabilities can be less accurate for deaf individuals, who may have different speech patterns. The woman requested but did not receive an accommodation.
While bias is a focus of this complaint, it also raises the question of reasonable accommodations for AI-powered assessments.
What Comes Next for AI Developers and Deployers
- Build with Governance from the Start
Design systems with audit trails, fairness checks, and oversight built in—not bolted on.
- Continuously Track AI Across Your Organization
Maintain a real-time inventory of internal and third-party AI systems—what they do, where they’re used, and what risks they carry (a minimal sketch of one such inventory record follows this list).
- Monitor and Test for Bias on a Rolling Basis
Use independent audits or synthetic fairness simulations to catch bias before it impacts decisions.
- Stay Aligned with Shifting Regulations
Map your AI systems to emerging laws like NYC LL 144, Colorado SB-205, the EU AI Act, and ISO 42001.
- Apply Human Oversight Where It Matters Most
Keep people in the loop – especially in high-impact areas like hiring, credit, and healthcare.
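To make the inventory item above concrete, here is a minimal sketch of how one record in such an inventory might be modeled. The fields, risk tiers, and review window are illustrative assumptions, not a schema prescribed by any of the laws named above.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI system inventory."""
    name: str                    # e.g., "resume-screening-model"
    vendor: str                  # "internal" or the third-party provider
    business_use: str            # where and how the system is used
    risk_tier: str               # illustrative tiers: "high", "medium", "low"
    last_bias_audit: date | None = None
    applicable_rules: list[str] = field(default_factory=list)

    def audit_overdue(self, today: date, max_days: int = 365) -> bool:
        """Flag systems with no audit, or none within the review window."""
        if self.last_bias_audit is None:
            return True
        return (today - self.last_bias_audit).days > max_days

# Example: a third-party hiring tool tracked alongside its obligations.
record = AISystemRecord(
    name="ats-candidate-scoring",
    vendor="Example ATS Vendor",  # hypothetical vendor name
    business_use="ranking job applicants for recruiter review",
    risk_tier="high",  # hiring is a high-impact use case
    applicable_rules=["NYC LL 144", "Colorado SB-205", "EU AI Act"],
)
print(record.audit_overdue(today=date.today()))  # True: no audit on file
```

Keeping a record like this per system makes it straightforward to answer the questions regulators and plaintiffs ask first: what the tool does, who built it, and when it was last tested.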
As an AI governance provider, we’re helping organizations move from reactive compliance to continuous accountability—with prebuilt policies, bias audit tooling, and regulatory readiness frameworks.
Related Articles
- Workday Lawsuit Over AI Resume Screening Bias: With July 29, 2025 Court Update
Harper v. SiriusXM Radio AI Hiring Bias Lawsuit FAQs
What is the Harper v. SiriusXM Radio lawsuit about?
The lawsuit claims that SiriusXM’s use of AI-powered hiring technology discriminated against African American job seekers. The plaintiff says he applied for approximately 150 jobs and was ultimately rejected from all of them.
How does this lawsuit differ from Mobley v. Workday?
According to employment lawyer Heather Bussing, Harper v. SiriusXM Radio, LLC is focused solely on employer liability. The plaintiff is suing SiriusXM for using iCIMS’s AI-powered hiring system in a way that allegedly resulted in discriminatory rejections. Under employment law, employers are always liable for discrimination in their hiring practices—whether or not technology is involved.
Bussing notes that while we don’t know the plaintiff’s exact reasoning for not suing iCIMS, there are likely practical and legal factors at play. The plaintiff is self-represented, liability for technology providers is less settled under current law, and pursuing claims against a software vendor would typically require significant expert testimony and resources.
By contrast, Mobley v. Workday is testing new ground: whether AI technology providers themselves can be held liable for causing or contributing to discriminatory outcomes when their tools are used by employers.
Put simply:
- Harper v. SiriusXM → employer liability for discriminatory use of AI.
- Mobley v. Workday → vendor liability for building discriminatory AI.
As Bussing points out, while plaintiffs may seek fairness, the remedies available in lawsuits are generally financial damages rather than systemic reform—another reminder of why proactive AI governance matters.
How did SiriusXM allegedly discriminate against job applicants?
The lawsuit claims that iCIMS’s AI, which SiriusXM uses to score job applicants, relied on data points that can serve as proxies for the candidate’s race, such as schools attended or zip code.
What does this lawsuit mean for employers using AI in hiring?
Employers can still be held liable for discrimination caused by AI, even if they did not build the technology. The EEOC has made this point clear.
What steps should employers take to avoid legal risk?
Employers should conduct their own bias assessments of AI technologies, give job candidates appropriate transparency around the use of AI, ensure capable humans oversee AI systems, and document all governance mechanisms in place.
What steps should HR tech vendors take to avoid legal risk?
Vendors must evaluate their AI for bias before and after deployment, stay compliant with laws like the EU AI Act and emerging U.S. state laws, and provide appropriate descriptions and guidance so that employers can use the AI safely and responsibly.