Colorado: New Regulations Address the Use of AI in Employment Decisions
APPLIES TO: All Employers with Employees in CO
EFFECTIVE: February 1, 2026
QUESTIONS? Contact HR On-Call
Discussion
A first-of-its-kind law, Colorado’s SB 24-205 establishes statutory tort liability for AI-driven algorithmic discrimination in employment decisions. While other jurisdictions have enacted laws regulating employers’ use of AI technology, Colorado is the first state to directly establish a duty of reasonable care in the development and deployment of AI tools across hiring, employment, and other consumer service sectors. SB 24-205 specifically requires employers who use covered AI tools to implement risk management policies, conduct impact assessments, and provide detailed notices to employees and applicants. The law takes effect on February 1, 2026.
Colorado’s new law applies to machine-based systems that use inferential techniques to produce predictions, recommendations, decisions, or content, specifically those that “make,” or are a “substantial factor” in making, “a decision that has a material legal or similarly significant effect on the provision, denial, cost or terms” of hiring or employment, among other areas. The law does not clearly define what constitutes a “substantial factor,” stating only that an AI tool falls within the law’s scope if it “assists in making” the decision at issue and is “capable of altering the outcome.”
Under the law, employers who build, modify, or use covered AI tools owe a duty of care to all Colorado residents to protect them from “any known or reasonably foreseeable risks” of AI-driven algorithmic discrimination. The law imposes certain transparency, notice, analysis, and documentation requirements. Specifically, developers of covered AI tools must provide certain information to users, including:
- A general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the covered AI tool; and
- Documentation regarding the purpose of the tool, the data used to train the tool, limitations of the tool, intended benefits and uses of the tool, how the tool was evaluated for performance and mitigation of algorithmic discrimination, and any other documentation reasonably necessary to assist the user in understanding or monitoring the performance of the tool.
Based in part on the information they receive from developers, users or “deployers” of covered AI tools must implement a risk management policy and program to govern the use of the tool, specifying, among other things, the principles, processes, and personnel responsible for identifying, documenting, and mitigating known or reasonably foreseeable risks of algorithmic discrimination. Deployers must also complete an impact assessment of the AI tool at least annually and within 90 days after any intentional and substantial modification to the system. The law sets out several specific requirements for the impact assessment, including:
- The purpose, intended use cases, benefits and deployment context of the tool;
- A description of the categories of data the AI tool processes as inputs and the outputs the system produces;
- An analysis of whether the tool poses any known or reasonably foreseeable discrimination risks and any steps taken to mitigate those risks; and
- A description of the monitoring and user safeguards provided for the tool.
Impact assessments must be maintained for at least three years following deployment or modification of the tool. Employers using covered AI tools must conduct a review of the tool at least annually to ensure that it is not causing algorithmic discrimination. The law also imposes specific notice requirements: (1) a general notice, published online, summarizing the AI tool and how it is managed for known or reasonably foreseeable risks, including the nature, source, and extent of the information collected and used by the tool; and (2) an additional notice to any Colorado resident who is subject to a consequential adverse decision, such as denial of employment, made by or with assistance from the tool.
In the adverse decision notice, the employer must provide: (1) the reason for the adverse decision; (2) the impact of the AI tool on the decision; (3) the data used by the tool in making or assisting with the decision; and (4) the sources of that data. Employers must also give the individual an opportunity to “correct any incorrect personal data” used by the AI tool and an opportunity to appeal the adverse decision.
Employers should note that although Colorado’s governor signed the law, he did so with reservations. Governor Polis sent a letter to Colorado legislators encouraging them to reconsider and amend certain aspects of SB 24-205 before the law’s effective date. Specifically, the governor expressed concerns about the burden placed on businesses and the potential negative effect the law could have on technology development. In light of these concerns, employers should continue to monitor developments and any future amendments to the law.
Action Items
- Implement policies and procedures addressing the use of covered AI tools in the workplace.
- Prepare for impact assessment and notice requirements.
- Train appropriate personnel on requirements and best practices for using covered AI tools in the workplace.
- Consult with legal counsel when developing or implementing AI tools for use in making consequential employment decisions.
Disclaimer: This document is designed to provide general information and guidance concerning employment-related issues. It is presented with the understanding that ManagEase is not engaged in rendering any legal opinions. If a legal opinion is needed, please contact the services of your own legal adviser. © 2024 ManagEase