California: New AI Regulations Coming for Employers!
APPLIES TO: Employers with 5+ Employees in CA
EFFECTIVE: Pending
QUESTIONS?: Contact HR On-Call
Quick Look
The California Civil Rights Department has proposed rules making employers liable under FEHA for discrimination committed through AI tools used in employment decisions. Employers should audit their AI tools for potential adverse impact before the rules are adopted, likely later this year.
Discussion
On May 17, 2024, the California Civil Rights Department issued proposed rules governing when and how employers may use AI tools in employment decisions. AI tools, or “automated-decision systems,” are computational processes that screen, evaluate, categorize, recommend, or otherwise make a decision or facilitate human decision-making that impacts applicants or employees. An automated-decision system may be derived from and/or use machine learning, algorithms, statistics, and/or other data processing or artificial intelligence techniques.
The main theme of the proposed rules is that employers will be liable for any violation of anti-discrimination laws committed through the use of AI tools, just as they would be liable for any violation not involving AI tools. Specifically, the proposed rules would make it unlawful for an employer to use an AI tool that has an adverse impact against applicants or employees on the basis of any characteristics protected under the Fair Employment and Housing Act (FEHA). Adverse or disparate impact means “the use of a facially neutral practice that negatively limits, screens out, tends to limit or screen out, ranks, or prioritizes applicants or employees on a basis protected by [FEHA].”
Similarly, employers using AI tools would have the same defenses available as employers that do not use them, including the ability to demonstrate that the use of the AI tool was job-related and consistent with business necessity, and that no less discriminatory, equally effective policy or practice was available. Further, evidence that an employer subjected an AI tool to anti-bias testing or made similar efforts to avoid unlawful discrimination, including evidence of the quality, recency, and scope of such efforts, is relevant to an employer’s defense.
Notably, the proposed rules give examples of uses of AI tools that could have an adverse impact on applicants and employees or otherwise violate existing FEHA protections:
- AI tools that “rank” or “prioritize” applicants based on their schedules may have an adverse impact on applicants based on their religious creed, disability, or medical condition.
- AI tools that measure an applicant’s skill, dexterity, reaction time, and/or other abilities or characteristics may have an unlawful adverse impact on individuals with certain disabilities or other protected characteristics.
- AI tools that analyze an applicant’s tone of voice, facial expressions, or other physical characteristics or behavior may have an unlawful adverse impact on individuals based on race, national origin, gender, or a number of other protected characteristics.
- AI tools would specifically be precluded from conducting applicant background screens prior to making a conditional offer of employment, as is already required of employers.
- AI tools, without additional processes, would not be able to conduct the individualized assessments employers are required to do when withdrawing a conditional offer of employment due to a failed background screen.
Additionally, those who provide employers with AI tools or use AI tools on behalf of employers could be liable for violations of FEHA.
In anticipation of the proposed rules likely being adopted in some form later this year, employers should audit AI tools for potential adverse impact on applicants and employees and ensure that AI tool content is otherwise consistent with their obligations under FEHA.
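While the proposed rules do not prescribe a particular testing method, one common rule of thumb for the kind of anti-bias testing described above is the “four-fifths rule” used in federal adverse-impact analysis: comparing each group’s selection rate to that of the highest-selected group. The sketch below is a minimal illustration only, using hypothetical group labels and outcome counts; a ratio above 0.8 does not by itself establish compliance with FEHA, and a real audit would pair such checks with statistical significance testing and a job-relatedness analysis.

```python
# Minimal sketch of a "four-fifths rule" adverse-impact check on the
# outcomes of an AI screening tool. Group labels and counts below are
# hypothetical; this is an illustration, not a compliance determination.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of a group's applicants the tool screened in."""
    return selected / total if total else 0.0

def impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate.

    `groups` maps a group label to (selected, total). Under the
    four-fifths rule of thumb, a ratio below 0.8 flags potential
    adverse impact warranting closer review.
    """
    rates = {g: selection_rate(s, t) for g, (s, t) in groups.items()}
    best = max(rates.values())
    return {g: (r / best if best else 0.0) for g, r in rates.items()}

# Hypothetical outcomes from an AI resume-screening tool.
outcomes = {
    "group_a": (48, 100),  # 48 of 100 applicants screened in
    "group_b": (30, 100),  # 30 of 100 applicants screened in
}

for group, ratio in impact_ratios(outcomes).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
# group_b: 0.30 / 0.48 ≈ 0.62 < 0.8, so it is flagged for review.
```

Even where a vendor operates the tool, employers remain responsible under the proposed rules, so audits should cover vendor-supplied outcome data as well.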
Action Items
- Review the proposed rules issued by the California Civil Rights Department.
- Audit AI tools for compliance with anti-discrimination laws.
Disclaimer: This document is designed to provide general information and guidance concerning employment-related issues. It is presented with the understanding that ManagEase is not engaged in rendering any legal opinions. If a legal opinion is needed, please consult your own legal adviser. © 2024 ManagEase