EEOC Releases New Q&A Resource on Use of Artificial Intelligence Under Title VII

APPLIES TO

All Employers with 15 or more Employees

EFFECTIVE

May 18, 2023

  

QUESTIONS?

Contact HR On-Call

(888) 378-2456

Quick Look

  • The use of algorithmic decision-making software constitutes a “selection procedure” when it is used to make or inform decisions about whether to hire, promote, terminate, or take similar employment-related actions toward applicants or current employees.
  • Employers may face liability under Title VII when using AI or other algorithmic decision-making software that has the effect of disproportionately excluding people based on a protected classification.

Discussion

On May 18, 2023, the Equal Employment Opportunity Commission (EEOC) released new guidance in the form of a Questions and Answers resource (Guidance) intended to help employers determine whether and how to monitor algorithmic decision-making tools when using such technology in employment-related decisions. The Guidance begins with a broad overview and definitions of key terms, including what constitutes automated systems and artificial intelligence, as well as a review of the different theories of discrimination prohibited under Title VII.

Specifically, Title VII prohibits disparate treatment and disparate impact discrimination. Disparate treatment discrimination is intentional discrimination against an individual based on their membership in a protected class (e.g., race, sex, religion, or national origin), whereas disparate impact or “adverse impact” discrimination arises when an employer uses a neutral policy, test, or selection procedure that has the effect of disproportionately excluding people based on a protected classification. The EEOC notes that the current Guidance on the use of AI technology under Title VII focuses primarily on disparate impact discrimination.

In 1978, the EEOC adopted the Uniform Guidelines on Employee Selection Procedures (the Guidelines), which have since guided employers in determining whether their neutral tests and selection procedures are lawful for purposes of Title VII disparate impact analysis. The EEOC’s current Q&A Guidance expands upon the 1978 Guidelines, providing updated analysis of selection procedures performed by artificial intelligence or other algorithmic decision-making software when used for certain employment decisions.

The new Guidance explains that a “selection procedure” is any “measure, combination of measures, or procedure” that employers use as a basis for an employment decision; therefore, AI or other algorithmic decision-making tools are subject to the 1978 Guidelines “when they are used to make or inform decisions about whether to hire, promote, terminate, or take similar actions towards applicants or current employees.” The Guidance further explains that employers can assess whether a selection procedure has a disparate impact on a particular group by determining whether the procedure selects individuals in a protected group “substantially” less often than individuals in another group. Under the new Guidance, if an AI system adversely affects applicants or employees in a particular protected category, its use likely violates Title VII.

The Q&A Guidance also reviews the 1978 Guidelines’ “four-fifths rule,” explaining that the rule is simply a rule of thumb and “may be inappropriate under certain circumstances,” such as where an AI tool makes a large number of selections, so that even small differences in selection rates may reflect an adverse impact on certain groups, or where an employer’s actions disproportionately affect individuals in protected groups. Employers therefore cannot blindly rely on the four-fifths rule to ensure compliance with Title VII.
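For context, the four-fifths rule compares one group’s selection rate to that of the most-selected group; a ratio below 80% is conventionally treated as evidence of potential adverse impact. The following is a minimal sketch of that arithmetic in Python, using hypothetical applicant and selection counts (the figures and group labels are illustrative only and are not drawn from any actual employer’s data):

    # Hypothetical example of the four-fifths rule of thumb.
    # Suppose a screening tool advances 48 of 80 applicants from Group A
    # and 12 of 40 applicants from Group B (figures invented for illustration).

    selected_a, applicants_a = 48, 80
    selected_b, applicants_b = 12, 40

    rate_a = selected_a / applicants_a  # 0.60 selection rate for Group A
    rate_b = selected_b / applicants_b  # 0.30 selection rate for Group B

    # Compare the lower selection rate to the higher one.
    impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)  # 0.30 / 0.60 = 0.50

    # Under the rule of thumb, a ratio below 0.80 (four-fifths) suggests
    # a potential adverse impact that warrants further review.
    print(f"Impact ratio: {impact_ratio:.2f}")
    print("Flag for review" if impact_ratio < 0.8 else "Within four-fifths threshold")

Even where this ratio clears the 0.8 threshold, the Guidance cautions that smaller differences in selection rates can still amount to adverse impact, particularly when large numbers of selections are involved.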

Additionally, the new Guidance confirms that employers may be held responsible for AI decision-making that creates a disparate impact “even if the tools are designed or administered by another entity.” This means that employers using AI or other algorithmic decision-making software can still be found liable for discrimination under Title VII even if the software was developed by someone outside the company.

Developing a selection procedure using AI technology gives employers the opportunity to explore different algorithmic options, so if an employer finds that a certain algorithm disproportionately excludes a certain protected classification, the employer can and should take steps to select a comparably effective, less discriminatory alternative. Notably, the EEOC cautions that an employer’s failure to adopt a less discriminatory algorithm during the development process may give rise to liability under Title VII.

 

Action Items

  1. Review the EEOC’s full Q&A Guidance on the use of AI under Title VII, available on the EEOC’s website.
  2. Conduct periodic audits of employment decision-making processes to ensure ongoing compliance with state and federal anti-discrimination laws and regulations.
  3. Implement policies and procedures addressing AI use in the workplace.
  4. Subscribers can call our HR On-Call Hotline at (888) 378-2456 for further assistance.

Disclaimer: This document is designed to provide general information and guidance concerning employment-related issues. It is presented with the understanding that ManagEase is not engaged in rendering any legal opinions. If a legal opinion is needed, please consult your own legal adviser. © 2023 ManagEase