AI Validation in Talent Acquisition 101

Feb 25, 2025

The Urgent Need for AI Validation in Talent Acquisition: Are You at Risk?

Talent acquisition processes are evolving as more companies integrate AI and algorithms to streamline recruitment. The “AI in Hiring 2025” survey reports that 99% of talent acquisition managers are using AI tools throughout their processes.1 These tools can quickly sort, rank, and filter candidates, saving time and resources. But, as with any new technology, there are potential pitfalls that, if ignored, could land your company in legal trouble. One of the biggest risks is failing to validate your AI talent acquisition tools, and the consequences can be significant.

What is AI Validation?

AI validation is the process of ensuring that your AI talent acquisition tools are not discriminating against certain groups of people. It stems from the legal concept of disparate impact, which refers to a seemingly neutral policy or practice that has a statistically significant negative effect on a protected group. This is the only case in US employment discrimination law where intent to discriminate does not need to be proven. If your AI is producing discriminatory outcomes, its use is illegal unless you can prove job relatedness and consistency with business necessity through a validation process.
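To make the disparate-impact concept concrete, the sketch below applies the "four-fifths rule" from the EEOC's Uniform Guidelines, a common first screen for adverse impact: a group's selection rate below 80% of the highest group's rate is a red flag. The applicant counts and helper functions here are hypothetical, and this ratio is a screening heuristic, not a legal determination.

```python
# Hypothetical illustration of the UGESP "four-fifths" (80%) rule.
# All applicant counts below are invented for demonstration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants who passed the screening step."""
    return selected / applicants

def impact_ratio(group_rate: float, highest_rate: float) -> float:
    """Ratio of a group's selection rate to the highest group's rate."""
    return group_rate / highest_rate

# Example: an AI screen passes 48 of 120 applicants from group A
# and 20 of 100 applicants from group B.
rate_a = selection_rate(48, 120)   # 0.40
rate_b = selection_rate(20, 100)   # 0.20

ratio = impact_ratio(rate_b, max(rate_a, rate_b))  # 0.20 / 0.40 = 0.50

# A ratio of 0.50 is well below the 0.80 threshold, so this screen
# would generally be flagged for adverse impact and require validation.
print(f"Impact ratio: {ratio:.2f}")
```

A tool that fails this screen is not automatically illegal, but it shifts the burden to the employer to demonstrate job relatedness and business necessity.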

Who is Affected?

A lack of AI validation can impact a wide range of people, including members of the classes protected under the laws the EEOC enforces. Talent acquisition policies that disproportionately affect applicants or employees based on race, color, religion, sex, national origin, disability, or age—even if unintentional—can result in lawsuits, penalties, and a damaged reputation. If a workplace policy harms a protected group and isn’t directly tied to job performance, it could be considered illegal.2

If an algorithm, automation, or AI tool has a statistically significant impact on any of these groups, it is considered illegal, unless the employer can prove job relatedness and consistency with business necessity through a validation process.

The Legal Environment

The legal requirement for AI validation is not new. The concept is rooted in the Uniform Guidelines on Employee Selection Procedures (UGESP), created by the EEOC decades ago. These guidelines outline how businesses can validate their talent acquisition practices to ensure they're fair and non-discriminatory. Unfortunately, many companies are unaware of this requirement. In addition, recent Executive Orders from the new U.S. Administration eliminating DEI initiatives for federal agencies and federal contractors have created confusion and questions about how the broader anti-discrimination regulations might be affected.

In any event, the government's position is clear: If you're using a tool or an automation that discriminates, you're liable, even if the tool was created by a third party. This means that simply using an applicant tracking system or other HR technology doesn't absolve you of responsibility.

Two recent legal cases around bias in AI talent acquisition tools and automations are setting the stage for future litigation and serve as important warning signals for employers.

iTutorGroup, a company providing English-language tutoring services to students in China, was found to have violated the Age Discrimination in Employment Act (ADEA) by programming its online recruitment software to automatically reject older applicants.

The EEOC filed a lawsuit against iTutorGroup3, its first against a company using AI to make employment decisions. Key points for employers:

  • Age discrimination is illegal, even when automated: The iTutorGroup case demonstrates that employers are responsible for discrimination that occurs through AI.

  • The EEOC is focusing on AI and algorithmic fairness: The EEOC has launched an Artificial Intelligence and Algorithmic Fairness Initiative to ensure that AI and other technologies used in talent acquisition comply with federal civil rights laws. The EEOC is actively working to identify and address instances of AI misuse that lead to discrimination.

  • Remote workers are protected by anti-discrimination laws: The EEOC emphasizes that U.S. anti-discrimination laws protect even fully remote workers providing services to clients abroad.

  • Be aware of the data set: Algorithms themselves may not be the source of discrimination, but the data sets they pull from can be. Employers should ensure that their data sets do not contain information that could lead to discriminatory talent acquisition practices.

  • Asking age-related questions can cause legal issues: Employers in the U.S. should avoid asking age-related questions on applications.

  • The EEOC will be monitoring AI tools: The EEOC's Strategic Enforcement Plan includes focus on AI tools.

  • Lack of discriminatory intent is not a shield: Bias can be unintentionally baked into AI software.

  • Carefully consider AI talent acquisition tools: Many worker advocates and policymakers are concerned about the potential for biases to be incorporated into AI software.

  • Settlement terms included non-monetary relief designed to prevent discrimination: Even though iTutorGroup stopped hiring in the U.S., the settlement includes "extensive and continuing training for those involved in hiring tutors, issuance of a robust new anti-discrimination policy, and strong injunctions against discriminatory talent acquisition based on age or sex and requesting applicants’ birth dates."

The second case is ongoing and involves Workday, a human resources software firm.4 This class action lawsuit alleges that Workday's AI-powered screening software discriminates against job applicants based on race, age, and disability, violating Title VII of the Civil Rights Act of 1964 as well as other federal laws. The case, Mobley v. Workday Inc, is in the U.S. District Court for the Northern District of California and could cover tens of thousands, or even hundreds of thousands, of people. Key points in this case for employers:

  • Allegations of Discrimination: The lawsuit claims Workday's AI tools screen out applicants in a discriminatory manner due to the algorithmic decision-making tools it uses. The plaintiff, Derek Mobley, alleges he was rejected for over 100 jobs despite meeting or exceeding requirements, suggesting the AI may be biased.

  • AI Bias: The core concern is that AI tools can discriminate if they are built using data that reflects existing biases. Workday allegedly uses data from a company's existing workforce to train its AI, which may perpetuate existing discrimination.

  • Legal Implications: The EEOC has stated that employers can be held legally liable if they fail to prevent screening software from having a discriminatory impact. The judge ruled that Workday could be considered an employer covered by federal laws banning workplace discrimination because it performs screening functions that its customers would normally carry out themselves.

  • Company Response: Workday denies any wrongdoing and states that it engages in an ongoing "risk-based review process" to ensure its products comply with applicable laws. Workday also argues that it does not have oversight or control of its customers’ job application processes, and that its customers do not delegate control of their talent acquisition processes to Workday.

Given how widely AI tools and automations are used in the talent acquisition process, it is important for employers to stay aware of the legal precedents forming in this area.

The Ethical Imperative

Beyond the legal considerations, there's an ethical imperative to ensure that AI tools are being used responsibly. As AI becomes more sophisticated, it's essential for everyone involved—from tech companies to employers—to take ownership of the potential for discriminatory outcomes.

What to Do

If you're using AI in your talent acquisition process, it's time to take action. Here are some key steps:

  1. Assess your tools: Determine if the AI you're using has a disparate impact on any group of people.

  2. Understand the law: Familiarize yourself with the legal requirements for validation.

  3. Work with experts: Partner with companies specializing in AI validation, like AI Validation Lab, to ensure your tools are compliant and fair.
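The first step above can be sketched in code. The example below runs a standard two-proportion z-test on two groups' selection rates, the kind of statistical-significance check a disparate-impact assessment typically starts with. The applicant counts and function names are invented for illustration; a real audit should be conducted with legal counsel and qualified validation experts.

```python
# Hypothetical sketch of step 1 ("Assess your tools"): is the gap between
# two groups' selection rates statistically significant? Counts are invented.
import math

def two_proportion_z(sel_a: int, n_a: int, sel_b: int, n_b: int) -> float:
    """Z statistic for the difference between two selection rates."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

def p_value_two_sided(z: float) -> float:
    """Two-sided p-value under the standard normal distribution."""
    return math.erfc(abs(z) / math.sqrt(2))

# Example: 480 of 1,200 group-A applicants selected (40%)
# versus 200 of 1,000 group-B applicants (20%).
z = two_proportion_z(480, 1200, 200, 1000)
p = p_value_two_sided(z)

# A p-value far below 0.05 indicates the disparity is statistically
# significant, which would trigger the need for a validation study.
print(f"z = {z:.2f}, p = {p:.3g}")
```

A significant result does not by itself establish liability; it establishes that the tool produces a disparity the employer must then justify through validation.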

Conclusion

The legal and ethical need for AI validation in talent acquisition is clear. Hiring and workplace policies must be validated as fair and legally sound to avoid legal issues. Ignoring this aspect not only poses a significant legal risk, but can also damage a company's reputation and result in discriminatory practices. By taking a proactive approach and ensuring that your AI tools are validated, you can foster a fair talent acquisition process, keep your business compliant, and create a stronger workforce that is aligned with your organizational values.

Sources:

  1. AI Hiring in 2025 Survey (Insight Global)

  2. Prohibited Employment Policies/Practices (EEOC)

  3. iTutorGroup to Pay $365,000 to Settle EEOC Discriminatory Hiring Suit (EEOC)

  4. Workday Must Face Novel Bias Lawsuit Over AI Screening (Reuters)

Additional Sources:

EEOC Sues iTutorGroup for Age Discrimination

EEOC Settles First AI Age Discrimination Lawsuit (American Society of Employers)

Tutoring Firm Settles US Agency’s First Bias Lawsuit Involving AI Software

Workday Accused of Facilitating Widespread Bias in Novel AI Lawsuit (Reuters)