
Colorado Artificial Intelligence Act: 5 Things You Need to Know | Orrick, Herrington & Sutcliffe LLP

Colorado has passed a unique law governing the development and use of artificial intelligence.

Here are five things to know about the Colorado AI Act in its current form—and how it could change before it goes into effect.

1. The framework of the law will evolve before implementation in 2026.

Although the AI law won’t go into effect until February 2026, Colorado is already under increasing pressure to change the law due to concerns about unintended consequences for consumers and businesses.

Colorado Governor Jared Polis said in a letter that lawmakers plan to review the law “to ensure that the final regulatory framework protects consumers and supports Colorado’s leadership in the AI sector.”

2. The law primarily applies to high-risk AI systems.

The law applies only to “high-risk artificial intelligence systems,” defined as “any artificial intelligence system that, when deployed, makes or is a substantial factor in making a consequential decision.”

  • Artificial intelligence system: “Any machine-based system that, for an explicit or implicit purpose, infers from the inputs the system receives how to generate output . . . that can affect physical or virtual environments.”
  • Consequential decision: “A decision that has a material legal or similarly significant effect on the provision or denial to a [Colorado resident] of, or the cost or terms of:
    • Educational enrollment or an educational opportunity.
    • Employment or a work opportunity.
    • A financial or credit service.
    • An essential government service, health care, housing, insurance or a legal service.”

Despite several exceptions for systems that perform limited procedural tasks or merely augment decision-making, these definitions can be interpreted broadly and could apply to a wide range of technologies.

The governor’s letter makes clear that revisions to the law will refine definitions to ensure the law only regulates the riskiest systems.

As a result, the law in its final form will likely only apply to AI systems that actually influence decisions with a material legal or similarly significant effect on designated services of major importance.

3. Developers have a duty to avoid algorithmic discrimination.

The law applies to anyone doing business in Colorado who develops or intentionally and substantially modifies a high-risk artificial intelligence system. It requires them to exercise reasonable care to protect consumers from algorithmic discrimination.

Developers must make documentation available to deployers and other developers of the system. The documentation must disclose, among other things:

  • The purpose, intended benefits and reasonably foreseeable uses of the system.
  • The type of data used to train the system and the controls implemented in the training process.
  • The limitations of the system.
  • The evaluation performed on the system to address algorithmic discrimination.
  • The measures taken to mitigate the risks of algorithmic discrimination.
  • How the system should be used, not used, and controlled.
  • Any other information reasonably necessary to assist deployers in complying with their obligations under the law.

In its current form, the law requires developers to proactively notify the Colorado Attorney General and known deployers or other developers of algorithmic discrimination concerns. However, the governor’s letter indicates an intent to move to a more traditional enforcement framework without mandatory proactive disclosures.

4. Deployers also have a duty to avoid algorithmic discrimination.

The law also requires anyone doing business in Colorado that uses a high-risk artificial intelligence system to exercise reasonable care to protect consumers from algorithmic discrimination related to such systems. Deployers must:

  • Implement a risk management policy and program to govern the deployment of the high-risk artificial intelligence system.
  • Complete impact assessments for the high-risk artificial intelligence system.

In its current form, the law also requires deployers to proactively notify the Colorado Attorney General of algorithmic discrimination. However, the governor’s letter indicates that Colorado plans to move to a more traditional enforcement framework without mandatory proactive disclosures.

Additionally, the letter says lawmakers plan to amend the law to focus regulation on developers of high-risk artificial intelligence systems rather than on smaller companies that deploy them. As a result, we may see scaled-back deployer obligations or broader deployer exemptions in the final regulatory framework.

5. The law provides consumer rights with regard to artificial intelligence systems.

Developers and deployers must make a public statement to consumers summarizing the types of high-risk artificial intelligence systems they develop or use and how they mitigate the risks of algorithmic discrimination.

Deployers must also notify consumers when they use a high-risk artificial intelligence system to make a consequential decision about them, or when such a system is a substantial factor in making that decision. They must do so before the decision is made. They must also provide the consumer with information about the decision and, where available, the right to opt out.

If a high-risk artificial intelligence system results in a decision adverse to a consumer, the deployer must:

  • Disclose to the consumer:
    • The main reason or reasons for the decision.
    • The extent to which the system contributed to the decision.
    • The type of data the system processed in making the decision and the sources of that data.
  • Provide the opportunity to correct the data processed by the system to make the decision.
  • Provide the ability to appeal the decision and request a human review.

Finally, the law requires that any artificial intelligence system (whether high-risk or not) intended to communicate with consumers be accompanied by a notice informing the consumer that they are communicating with an artificial intelligence system.

What does this mean for your company?

While the final form of the Colorado Artificial Intelligence Act may differ from the version passed by the state legislature, businesses should begin preparing for substantive AI regulation by:

  • Developing an organizational framework for evaluating and managing AI-related risks.
  • Preparing AI reports and documentation that describe how the systems were developed, how they should be used, and what measures have been taken to mitigate risks associated with their use.
  • Establishing a process for assessing risks and potential impacts of deploying third-party AI.
  • Expanding organizational procedures, including contracting and management procedures for third parties, to account for unique AI risks.