On June 2, the Texas legislature passed the Texas Responsible Artificial Intelligence Governance Act (TX AI Act or bill), which heads to the governor for his signature or veto. The bill will take effect January 1, 2026, if the governor signs it into law. It is the most comprehensive piece of AI governance legislation to pass a state legislature to date. If enacted, Texas will become the fourth state after Colorado, Utah, and California to pass AI-specific legislation.
The bill has emerged at a pivotal moment, after the U.S. House of Representatives passed a sweeping 10-year federal moratorium on state regulation of AI systems that would preempt enforcement of existing state AI laws and essentially nullify passage of future such laws. Notably, 40 state attorneys general (AGs) sent a bipartisan letter to federal legislators opposing the moratorium. This federal-state tension makes Texas' approach particularly significant.
Scope of the Act
Texas’ bill applies to developers and deployers of any “artificial intelligence system,” which is defined as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.” The bill is potentially broader in scope than Colorado and Utah’s regulation of only “high risk” AI systems.
The bill mandates certain disclosures concerning the development and deployment of AI systems. For example, providers of health care services and treatments who utilize AI in their practice must disclose to patients that they are using such systems. The bill also prohibits developing or deploying an AI system that causes harm to another person, encourages a person to inflict self-harm (including suicide), or engages in criminal activity.
Further, it will be illegal to develop or deploy AI that "infringes, restricts, or impairs" any rights guaranteed under the U.S. Constitution, or that discriminates against a person based on protected characteristics such as race, color, sex, age, or disability. While these mandates apply to the private and public sectors alike, the discrimination prohibitions do not apply to insurance companies or financial institutions so long as they are in compliance with their governing, industry-specific regulations. Finally, one cannot develop or deploy AI solely for creating deepfake sexually explicit videos or for producing child pornography, which would carry criminal penalties under applicable state law.
Several bill provisions also impose requirements solely on state and local government use of AI systems. Specifically, governments cannot use AI for social scoring as part of the disbursement or denial of benefits. And they cannot develop or deploy AI systems that capture individuals’ biometric data. State and local governments are also required to disclose when they deploy AI systems that interact with consumers.
Regulatory and Enforcement Framework
The Texas AG will have exclusive enforcement powers, including the ability to issue civil investigative demands to obtain training data, purpose documents, and metrics related to AI systems. If a person is deemed in violation of the statute, the AG must submit a notice to the party, who then has a 60-day "cure" period in which to remedy the violation. Texas state agencies would also have the ability to enforce the statute against persons and companies within their jurisdictions. The bill prescribes civil penalties between $10,000 and $12,000 for curable violations, between $80,000 and $200,000 for uncurable violations, and between $2,000 and $40,000 per day for continuing violations. The monetary penalties are in addition to potential injunctive relief.
The legislation also creates a Texas AI "council" under the Texas Department of Information Resources. The council is charged with, among other things, ensuring that AI systems developed and deployed in Texas operate in the best interests of Texas citizens; evaluating laws and regulations associated with AI for potential improvement; improving the efficiency of government through the deployment of AI systems; advising state and local governments and the legislature concerning AI; and developing reports and coordinating with other regulators on issues related to AI systems. The council will consist of seven individuals: three appointed by the governor, two appointed by the lieutenant governor, and two appointed by the speaker of the house. Each councilperson serves a four-year term. The council is also empowered to establish an advisory board composed of experts from the public who can assist in evaluating technical, ethical, and regulatory initiatives.
Finally, the law creates a Regulatory Sandbox Program within the Texas Department of Information Resources, allowing companies to develop and test innovative AI systems in a monitored environment, free from regulatory scrutiny and civil penalty. The purpose of the program is to encourage responsible deployment of AI systems.
Implications for Businesses
If enacted, the Texas AI Act would impose the most comprehensive set of governance regulations over the broadest swath of AI systems to date. Given the size of Texas, its business-friendly environment, and the concentration of tech companies within the state, the law would have a major impact nationally on the overall development and deployment of AI systems and related regulation and legislation.
The bill would also give Texas AG Ken Paxton another tool in his recent efforts aimed at privacy and consumer protection enforcement, including against AI systems. Paxton and other state AGs have utilized privacy, consumer protection, and discrimination laws to take action against AI development and deployment over the past year. Indeed, last September, Paxton announced a settlement with health care technology company Pieces Technology under the Texas Deceptive Trade Practices – Consumer Protection Act (DTPA). The enforcement action represented the first settlement under a state consumer protection act that involved generative AI.
In June of last year, Paxton announced the formation of a specialized team within his office's Consumer Protection Division. This team is dedicated to the "aggressive enforcement of Texas privacy laws," which include the Data Privacy and Security Act, the Identity Theft Enforcement and Protection Act, the Data Broker Law, the Biometric Identifier Act, and the Deceptive Trade Practices Act (DTPA), as well as federal laws such as the Children's Online Privacy Protection Act (COPPA) and HIPAA. In his announcement, Paxton highlighted the team as the largest of its kind in the U.S. The establishment of the unit coincided with the impending implementation of Texas' comprehensive consumer privacy law, the Data Privacy and Security Act, which became effective on July 1, 2024. Paxton has initiated additional legal actions under these various statutes this year as part of this initiative, including privacy actions under the DTPA.
Takeaways
Companies using AI and doing business with consumers across multiple jurisdictions must recognize the rapid evolution of state-level regulations concerning the development and deployment of AI systems. Colorado, Utah, California, and now potentially Texas each have their own unique requirements that carry significant civil penalties for noncompliance. Texas’ novel, comprehensive approach is also likely to serve as a model for other states considering similar legislation. States will soon learn whether the federal government will preempt their efforts, and indeed state-initiated lawsuits may follow if Congress and the president enact the legislation. In the meantime, companies must remain ever vigilant.
Further, companies must be aware that state authorities can utilize "traditional" state laws against AI use. Businesses that advertise or publicize their use of AI products, or market such products to the public, must avoid overselling the capabilities of those products because they are subject to state consumer protection acts, which prohibit false or misleading claims. Companies handling consumer personal identifying information must also take steps to ensure they are properly safeguarding it, as regulators will scrutinize their practices under state privacy laws. Finally, businesses utilizing AI must ensure those systems are producing fair and unbiased results in light of state anti-discrimination statutes.
Ensuring compliance early in the lifecycle of an AI system is the best mitigation against ever-expanding regulatory risk. Companies wishing to develop or deploy AI systems should consult experienced outside counsel.