EEOC, DOJ, CFPB and FTC Issue Joint Statement on Automated Systems and AI Concerns

APPLIES TO

All Employers with 15 or more Employees

EFFECTIVE

April 25, 2023

QUESTIONS?

Contact HR On-Call

(888) 378-2456

Quick Look

  • The joint statement by the EEOC, DOJ, CFPB and FTC raised concerns about the growing use of AI in making employment-related decisions.
  • The agencies pledge to enforce federal law to promote “responsible innovation.”

Discussion

On April 25, 2023, the Equal Employment Opportunity Commission (EEOC), Department of Justice (DOJ) Civil Rights Division, Consumer Financial Protection Bureau (CFPB), and the Federal Trade Commission (FTC) issued a joint statement, highlighting their concerns that emerging artificial intelligence (AI) technology could impact civil rights, fair competition, consumer protection, and equal employment opportunities.

As outlined in the joint statement, “automated systems” broadly refers to software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions. The agencies believe that these types of technologies have the potential to produce outcomes that result in unlawful discrimination when used to make critical decisions impacting individuals’ rights and opportunities. While the agencies acknowledge that such tools can be useful in certain situations, the joint statement explains that the agencies are concerned with how the tools rely on “vast amounts of data to find patterns or correlations” in order to “perform tasks or make recommendations and predictions.”

The joint statement identifies three potential sources of discrimination within AI technology: (1) Data and Datasets; (2) Model Opacity and Access; and (3) Design and Usage. Specifically, the agencies note the potential for outcomes to be skewed by unrepresentative or imbalanced datasets that incorporate historical bias; in other words, if the data fed into an AI system is limited, biased, or of low quality, the results it produces will be similarly limited in scope or contain similar biases. Additionally, the agencies raise concerns that many automated systems are “black boxes”; without the ability to truly understand how these systems reach their decisions, businesses and individuals cannot know whether those decisions are made fairly. Lastly, the agencies note that, in certain situations, automated systems may be designed for a particular use based on flawed assumptions about their users, the relevant context, or the underlying practices or procedures they are intended to replace, causing additional discrepancies when the systems are implemented differently than designed.

The joint statement indicates the EEOC, DOJ, CFPB and FTC’s commitment to enforcing federal laws to “promote responsible innovation” in the context of automated decision-making and AI technology. This comes as part of continued efforts by federal regulators to take a closer look at how automated decision-making technology is being used in the workplace. Several states have also begun to regulate certain types of AI tools and their use in employment decisions. Therefore, employers should continue to monitor developments in this area.


Action Items

  1. Audit AI tools for compliance with federal and state laws.
  2. Implement policies and procedures addressing AI use in the workplace.
  3. Subscribers can call our HR On-Call Hotline at (888) 378-2456 for further assistance.

Disclaimer: This document is designed to provide general information and guidance concerning employment-related issues. It is presented with the understanding that ManagEase is not engaged in rendering any legal opinions. If a legal opinion is needed, please retain the services of your own legal adviser. © 2023 ManagEase