On May 17, 2024, Colorado Governor Jared Polis signed into law Senate Bill 24-205, the Colorado Artificial Intelligence (AI) Act, making Colorado the first U.S. state to enact comprehensive legislation regulating the use and development of AI systems. The act is designed to regulate the private-sector use of AI systems, particularly addressing the risk of algorithmic discrimination arising from the use of so-called “high-risk AI systems.” The law will take effect on February 1, 2026, and the Colorado attorney general (AG) has exclusive enforcement authority.
Overview
The Colorado AI Act regulates “developers” (i.e., entities or individuals who create or substantially modify AI systems) and “deployers” (i.e., entities or individuals who use AI systems to make decisions or assist in decision-making) of “high-risk” AI systems. An AI system is considered “high-risk” if it “makes, or is a substantial factor in making, a consequential decision.” A “consequential decision,” in turn, is any decision that can significantly affect an individual’s legal or economic interests, such as decisions related to employment, housing, credit, and insurance.
The legislative impetus for the act is the concern that consequential decisions, when influenced or driven by AI systems, can potentially lead to “algorithmic discrimination.” The act defines algorithmic discrimination as a “condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals” on the basis of protected classifications. Accordingly, the act imposes various documentation, disclosure, and compliance obligations on developers and deployers that are intended to identify and prevent such discrimination.
Developer Obligations
Under the act, developers of high-risk AI systems are required to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In connection with this obligation, developers are also required to provide specific documentation to deployers or other developers of high-risk AI systems, including a general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the system, and detailed information about the system’s training data, limitations, purpose, intended benefits, and uses. Developers must also provide additional documentation necessary to assist in understanding the outputs of the AI system and how to monitor algorithmic decisions for bias.
Deployer Obligations
Deployers are also subject to a duty of reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. They are required to implement a risk management policy and program that is reasonable in light of certain government standards, the size and complexity of the deployer, the nature and scope of the system, and the sensitivity and volume of the data processed.
Deployers must conduct impact assessments of the AI system annually and after any intentional and substantial modification. These assessments must include a statement of the system’s purpose and intended use cases, an analysis of its algorithmic discrimination risks, a description of the data types used as inputs and outputs, the metrics used to evaluate the system, the transparency measures taken, and a description of post-deployment monitoring and user safeguards.
In addition, deployers are required to inform consumers that the deployer has deployed a high-risk AI system to make decisions; provide a statement of the purpose of the system and the nature of the decisions it makes; and provide information regarding the consumer’s right to opt out of the processing of personal data concerning the consumer for purposes of profiling.
If a decision is adverse to the consumer, the deployer must provide the consumer with a statement disclosing the reasons for the decision and the data used to make it, an opportunity to correct any incorrect data, and an opportunity to appeal the decision. Importantly, these notices must be provided directly to the consumer, in plain language, and in a format accessible to individuals with disabilities.
The AG’s Role
Both developers and deployers are required to disclose to the AG any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of a high-risk AI system. This disclosure is mandatory and must be made within 90 days after the developer or deployer: (1) discovers that the deployed system has caused or is reasonably likely to have caused algorithmic discrimination; or (2) receives a credible report indicating such an occurrence.
The AG may require developers and deployers to provide a general statement describing the reasonably foreseeable and potentially harmful uses of the high-risk AI system. When making these disclosures, developers and deployers may designate the information as proprietary or a trade secret. Importantly, disclosing information subject to attorney-client privilege or work-product protection does not constitute a waiver of that privilege or protection.
Finally, the act grants the AG exclusive enforcement authority. A violation of the act constitutes an unfair trade practice under the Colorado Consumer Protection Act, and the AG may seek injunctive relief, an assurance of discontinuance, damages, and civil penalties of up to $20,000 per violation, as well as any other relief necessary to ensure compliance with the act.
Why It Matters
The Colorado AI Act is a pioneering piece of legislation. As the first comprehensive state law governing AI, it sets a precedent for other states and potentially for federal legislation, thereby shaping the future of AI regulation.
With the law set to take effect on February 1, 2026, developers and deployers of AI systems have less than two years to ensure compliance with its requirements. Given the technical complexity of how AI models function, compliance may be challenging. Moreover, the process of auditing AI systems for bias can be resource-intensive. As such, companies that develop or deploy high-risk AI systems should take a compliance-by-design approach when building AI models.
Troutman Pepper will continue to monitor developments and will provide updates as additional information becomes available.
Troutman Pepper State Attorneys General Team
Ashley Taylor – Co-leader and Firm Vice Chair
Ashley is co-leader of the firm’s nationally ranked State Attorneys General practice, vice chair of the firm, and a partner in its Regulatory Investigations, Strategy + Enforcement (RISE) Practice Group. He helps his clients navigate the complexities involved with multistate attorneys general investigations and enforcement actions, federal agency actions, and accompanying litigation.
Clay Friedman – Co-leader
Clayton is a partner in the firm’s Regulatory Investigations, Strategy + Enforcement (RISE) Practice Group and co-leader of the State Attorneys General practice, both multidisciplinary teams with decades of experience crafting effective strategies to help deter or mitigate the risk of enforcement actions and litigation.
Judy Jagdmann
Judy is a partner in the firm’s Regulatory Investigations, Strategy + Enforcement (RISE) practice, based in the Richmond office. She brings experience serving as chair and commissioner of the Virginia State Corporation Commission (VSCC) from 2006 through 2022, which included regulating the utilities, insurance, banking, and securities industries. She also served as Virginia’s attorney general from 2005 to 2006.
Stephen Piepgrass
Stephen leads the firm’s Regulatory Investigations, Strategy + Enforcement (RISE) Practice Group. He focuses his practice on enforcement actions, investigations, and litigation. Stephen primarily represents clients engaging with, or being investigated by, state attorneys general and other state, local, and federal enforcement bodies, including the CFPB and FTC, as well as clients involved in related litigation, with a particular focus on heavily regulated industries.
Michael Yaghi
Michael is a partner in the firm’s State Attorneys General and Regulatory Investigations, Strategy + Enforcement (RISE) Practice Groups, nationwide teams that advise clients on consumer protection enforcement matters and other regulatory issues.
Samuel E. “Gene” Fishel
Gene is a member of the firm’s Regulatory Investigations, Strategy + Enforcement (RISE) practice, based in the Richmond office. He brings extensive regulatory experience, having most recently served as senior assistant attorney general and chief of the Computer Crime Section in the Office of the Attorney General of Virginia, and as special assistant U.S. attorney in the Eastern District of Virginia for 20 years.
Tim Bado
Tim is an associate in the firm’s Regulatory Investigations, Strategy + Enforcement (RISE) Practice Group, where he represents corporations and individuals facing potential civil and criminal exposure. Tim’s experience in government investigations, enforcement actions, and white-collar litigation spans a number of industries, including financial services, pharmaceutical, health care, and government contracting, among others.
Chris Carlson
Chris Carlson represents clients in regulatory, civil, and criminal investigations and litigation. In his practice, Chris regularly draws on his prior regulatory experience to benefit clients who are interacting with, or being investigated by, state attorneys general.
Blake R. Christopher
Blake collaborates with clients on matters related to government contracting, investigations, and disputes. His senior-level government experience generates valuable insights and strategies for clients across a variety of industries.
Natalia Jacobo
Natalia is an associate in the firm’s Regulatory Investigations, Strategy + Enforcement (RISE) practice. She focuses her practice on two primary areas: government contracting and state attorneys general work.
Namrata Kang
Namrata (Nam) is an associate in the firm’s Regulatory Investigations, Strategy + Enforcement (RISE) Practice Group, based in the Washington, D.C. office. She routinely advises clients on a wide variety of state and federal regulatory matters, with a particular emphasis on state consumer protection laws relating to consumer financial services and marketing and advertising.
Michael Lafleur
Michael is an associate in the firm’s Regulatory Investigations, Strategy + Enforcement (RISE) Practice Group. Based in the firm’s Boston office, Mike has deep experience in litigation, investigations, and other regulatory matters involving state-level regulators and state attorneys general.
Susan Nikdel
Susan is an associate in the firm’s Consumer Financial Services Practice Group. She has defended several of the nation’s largest and most influential financial institutions in individual and class action litigation involving the Telephone Consumer Protection Act (TCPA), Fair Credit Reporting Act (FCRA), Fair Debt Collection Practices Act (FDCPA), and other consumer privacy statutes.
John Sample
John is an associate in the firm’s Regulatory Investigations, Strategy + Enforcement (RISE) Practice Group. He focuses his practice on a wide range of general and complex litigation matters, including shareholder disputes, fraud, products liability, breach of contract, and Biometric Information Privacy Act claims.
Whitney Shephard
Whitney is an associate in the firm’s Regulatory Investigations, Strategy + Enforcement (RISE) Practice Group. She represents clients facing state and federal regulatory investigations and enforcement actions, as well as related civil litigation.
Trey Smith
Trey is an associate in the firm’s Regulatory Investigations, Strategy + Enforcement (RISE) Practice Group. He focuses his practice on helping financial institutions and consumer-facing companies navigate regulatory investigations and resulting litigation.
Daniel Waltz
Daniel is a member of the firm’s Regulatory Investigations, Strategy + Enforcement (RISE) Practice Group and State Attorneys General team. He counsels clients navigating complex government investigations, regulatory compliance, and transactions involving state and federal government contracting obligations. Drawing on his broad experience as a former assistant attorney general for the state of Illinois, Daniel is a problem solver both inside and outside the courtroom.
Stephanie Kozol
Stephanie is Troutman Pepper’s senior government relations manager in the state attorneys general department.