California: AI Regulations Adopted for Anti-Discrimination Employment Laws

APPLIES TO

All Employers with 5+ Employees in CA

EFFECTIVE

October 1, 2025

QUESTIONS?

Contact HR On-Call

(888) 378-2456

Quick Look

  • Regulations provide clarity on how existing anti-discrimination laws apply to the use of artificial intelligence in employment decisions.
  • Use of an automated-decision system may violate California law if it harms applicants or employees based on protected characteristics, such as gender, race, or disability.
  • Employers and covered entities must maintain employment records, including automated-decision data, for a minimum of four years.
  • Automated-decision system assessments, including tests, questions, or puzzle games that elicit information about a disability, may constitute an unlawful medical inquiry.
  • There are added definitions for key terms used in the regulations, such as “automated-decision system,” “agent,” and “proxy.”

Discussion

The California Civil Rights Council recently announced the adoption of regulations seeking to provide increased clarity on how existing anti-discrimination laws apply to the use of artificial intelligence in employment decisions. The regulations make it unlawful for employers or other covered entities to use automated-decision systems or selection criteria that discriminate against applicants or employees on a basis protected by the Fair Employment and Housing Act (FEHA). This includes prohibiting the use of automated-decision systems in unlawful employment activities, such as unlawful recruiting practices, pre-employment inquiries, application decision-making, and employee selection. Employers also cannot use automated-decision systems in tests, questions, puzzles, games, or other challenges that are likely to elicit protected information, such as information about a person’s disability, unless the inquiry is job-related for the position, consistent with business necessity, and no less discriminatory alternative serves the employer’s goals. Employers may also be required to provide religious or disability accommodations when using automated-decision systems to avoid prohibited discrimination.

 

“Automated-decision system” is defined as a computational process that makes or facilitates employment benefit decisions, including through artificial intelligence, machine learning, algorithms, statistics, or other data processing techniques. This includes computer-based tests used to make predictive assessments; measure skills, abilities, or characteristics; measure a personality trait, aptitude, attitude, or cultural fit; or screen, evaluate, categorize, or recommend applicants or employees. It also includes directing job advertisements to targeted groups; screening resumes for terms or patterns; analyzing facial expression, word choice, or voice; and analyzing data from third parties. Importantly, the definition excludes technologies that do not make decisions regarding employment benefits.

 

Application of these new provisions extends beyond those who may be considered a direct employer. Covered “agent[s]” of employers now specifically include those acting on behalf of the employer while engaging in traditional employer activities, such as applicant recruitment, applicant screening, hiring, promotion, or decisions regarding pay, benefits, or leave, including through the use of an automated-decision system.

 

Importantly, the regulations prohibit these covered activities “subject to any available defense.” Relevant considerations to any defense are the use of anti-bias testing, the results of and response to the testing, and other efforts to avoid unlawful discrimination. This is a cue to employers that they must undertake appropriate steps to confirm that their use of AI does not violate anti-discrimination laws. Specifically, the quality, efficacy, recency, and scope of their proactive efforts to avoid unlawful discrimination will be heavily scrutinized in the event of a discrimination claim.

 

Finally, personnel records must now be kept for four years from the date the record was created or the date the personnel action involved took place, including job applications, personnel records, membership records, employment referral records, selection criteria, automated-decision system data, and other records dealing with employment practices or affecting employment benefits. If a complaint has been filed, all records must be retained until the later of: (1) the first date after the time for filing a civil action has expired; or (2) the first date after the complaint has been fully resolved (including any appeals).

 

Employers should begin preparing now by conducting anti-bias testing of their AI programs and working with vendors to verify that the vendors’ AI programs also comply with the new FEHA regulations. Schedule anti-bias testing and other evaluation methods at periodic intervals, and implement corresponding procedures to maintain these processes.
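The regulations do not prescribe a specific anti-bias testing methodology. One widely used starting point in employment selection is the "four-fifths rule" (adverse-impact ratio) from the EEOC Uniform Guidelines, under which a group's selection rate below 80% of the highest group's rate is often treated as evidence of adverse impact warranting review. The sketch below is purely illustrative: the group names and counts are hypothetical, and a ratio check like this is a screening heuristic, not a substitute for statistical or legal review.

```python
# Illustrative sketch of a four-fifths (80%) adverse-impact screen.
# Group labels and applicant counts are hypothetical examples only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group that the system selected."""
    return selected / applicants

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the highest group's rate."""
    return group_rate / reference_rate

# Hypothetical outcomes of an automated resume screen, by group.
outcomes = {
    "group_a": {"selected": 48, "applicants": 100},
    "group_b": {"selected": 30, "applicants": 100},
}

rates = {g: selection_rate(v["selected"], v["applicants"])
         for g, v in outcomes.items()}
reference = max(rates.values())  # highest group's selection rate

for group, rate in rates.items():
    ratio = adverse_impact_ratio(rate, reference)
    status = "review" if ratio < 0.8 else "ok"
    print(f"{group}: rate {rate:.2f}, impact ratio {ratio:.2f} -> {status}")
```

In this hypothetical data, group_b's ratio is 0.30 / 0.48 ≈ 0.63, below the 0.8 threshold, so the result would be flagged for closer examination; a real program would pair such a screen with documented remediation steps, as the regulations' defense considerations suggest.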

 

Action Items

  1. Review the adopted FEHA regulations.
  2. Conduct anti-bias testing of employment AI programs.
  3. Review employment AI programs of contracted vendors.
  4. Have appropriate personnel trained on the use of employment AI programs, including to be able to identify potential problems with the system.
  5. Update record retention policies and procedures.

Disclaimer: This document is designed to provide general information and guidance concerning employment-related issues. It is presented with the understanding that ManagEase is not engaged in rendering any legal opinions. If a legal opinion is needed, please contact the services of your own legal adviser. © 2025 ManagEase