California: Final Regulations for Automated Decision-Systems
APPLIES TO: Employers with Employees in CA
EFFECTIVE: As Indicated
QUESTIONS? Contact HR On-Call
Discussion:
On March 21, 2025, the California Civil Rights Department (CRD) voted to approve final regulations titled “Employment Regulations Regarding Automated-Decision Systems,” which clarify that it is unlawful to use AI and automated decision-making tools to make employment-related decisions that discriminate against applicants or employees in violation of California laws. Key aspects of the final regulations are summarized below.
Definitions. The following key terms are defined under the final regulations:
- “Automated-Decision System” is defined as “[a] computational process that makes a decision or facilitates human decision making regarding an employment benefit,” including processes that “may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques.” Covered systems encompass a range of technological processes, such as tests, games, or puzzles used to assess applicants or employees; processes for targeting job advertisements or screening resumes; processes to analyze “facial expression, word choice, and/or voice in online interviews”; and processes to “analyz[e] employee or applicant data acquired from third parties.” Such systems do not include typical software or programs such as word processors, spreadsheets, map navigation systems, web hosting, firewalls, and common security software, “provided that these technologies do not make a decision regarding an employment benefit.”
- “Agent” is defined as “any person acting on behalf of an employer, directly or indirectly, to exercise a function traditionally exercised by the employer or any other FEHA-regulated activity … including when such activities and decisions are conducted in whole or in part through the use of an automated decision system.” The final regulations consider an employer’s “agent” to be an “employer” under the Fair Employment and Housing Act (FEHA) regulations.
- “Automated-Decision System Data” means “[a]ny data used to develop or customize an automated-decision system for use by a particular employer or other covered entity.”
- “Artificial Intelligence” is defined as “[a] machine-based system that infers, from the input it receives, how to generate outputs,” which can include “predictions, content, recommendations, or decisions.”
- “Machine Learning” means the “ability for a computer to use and learn from its own analysis of data or experience and apply this learning automatically in future calculations or tasks.”
Unlawful Selection Criteria. The final regulations confirm that it is “unlawful for an employer or other covered entity to use an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy) that discriminates against an applicant or employee or a class of applicants or employees on a basis protected” by FEHA.
Pre-Employment Practices. The final regulations clarify that online application technologies and automated-decision systems that screen, rank, or prioritize applicants based on certain criteria may unlawfully discriminate against individuals with protected characteristics, such as religious creed, disability, or medical condition, unless accommodations are provided. Systems that assess skills or reaction times, or that analyze physical characteristics, must allow for reasonable accommodations to avoid discrimination based on race, national origin, gender, or other protected traits.
Criminal History Inquiries. California law requires employers to make an individualized assessment of an applicant’s criminal record to determine its relevance to the job before denying employment. The final regulations confirm that employers must continue to comply with these requirements even when using an automated system to consider criminal histories.
Medical Inquiries. The final regulations reaffirm that the rules against unlawful medical or psychological inquiries apply even when using an automated-decision system. This specifically includes any puzzles or games administered by an automated-decision system that are “likely to elicit information about a disability.”
Third-Party Liability. The final regulations state that prohibitions on aiding and abetting unlawful employment practices apply to automated-decision systems, potentially implicating third parties involved in their design or implementation. Evidence of anti-bias testing and efforts to avoid discrimination is relevant to claims of unlawful discrimination. However, the regulations do not establish third-party liability for the design, development, advertising, promotion, or sale of these systems.
The final regulations have been submitted to the California Office of Administrative Law for review and approval. Once the final regulations are approved by the Office of Administrative Law and published by the Secretary of State, they will likely become effective on July 1, 2025. Employers should continue to monitor their use of AI to assess compliance with applicable anti-discrimination laws and requirements.
Action Items
- Review use of AI and automated-decision systems for compliance with applicable anti-discrimination laws.
- Have appropriate personnel trained on the proper use of automated-decision systems for making consequential employment decisions.
- Consult with legal counsel when developing or implementing new AI technologies or automated-decision systems in the workplace.
Disclaimer: This document is designed to provide general information and guidance concerning employment-related issues. It is presented with the understanding that ManagEase is not engaged in rendering any legal opinions. If a legal opinion is needed, please consult your own legal adviser. © 2025 ManagEase