Troutman Pepper Locke is proud to sponsor the Government Investigations & Civil Litigation Institute’s 11th Annual Meeting in Santa Ana Pueblo, New Mexico. Ashley Taylor will be moderating the “Navigating State Jurisdictional Waters: State Attorneys General and Their Reach” session, Ghillaine Reid will be a panelist on the “AI Ethics: Guardrails and Grey Areas in

On September 29, 2025, California Governor Gavin Newsom signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act, into law. The act takes effect on January 1, 2026. It builds upon the recommendations in the “California Report on Frontier AI Policy,” released to the public on June 17, 2025, which detailed key principles to guide the drafting process, including grounding AI policy in empirical research and providing greater transparency into AI systems. With 32 of the top 50 AI companies worldwide based in California, the state dominates the AI industry, so it is no surprise that California is the first state to create rules promoting safety, transparency, and incident reporting for frontier models. This new act is expected to set the stage for similar AI legislation across the U.S.

In this episode of Regulatory Oversight, Stephen Piepgrass is joined by Zack Condry, co-founder of Watermark Strategies, to analyze the evolving landscape of crisis management and the critical role of strategic communication in navigating complex issues. They explore effective communication strategies, public relations, and the growing role of AI in managing crises. Zack shares insights from his extensive experience in corporate communications and public affairs, from his background managing political campaigns to his current work developing digital strategies for high-profile clients.

In this crossover episode of The Good Bot and Regulatory Oversight, Brett Mason, Gene Fishel, and Chris Carlson discuss the latest state laws targeting AI, especially in health care. They break down new legislation in Colorado, Utah, California, and Texas, highlighting differences in scope and enforcement. They also cover how state attorneys general are using consumer protection and anti-discrimination laws to regulate AI, even in states without AI-specific statutes.

What Happened

Last week, Colorado lawmakers held a special session that culminated in a decision to delay the implementation of the Colorado Artificial Intelligence Act (CAIA) until June 30, 2026, extending the timeline beyond its original February 2026 start date. The delay gives businesses a brief additional window to prepare, but the law’s requirements remain intact: companies must still build governance programs and perform regular impact assessments of high-risk AI systems.

On June 18, Arizona Attorney General (AG) Kris Mayes, in partnership with the Better Business Bureau (BBB), announced a new consumer education campaign aimed at teaching Arizona residents how to avoid falling victim to a variety of scams. The campaign targets consumers unaware of such scams, especially senior citizens, and its series of video public service announcements (PSAs) aims to equip Arizona consumers to spot and avoid scams on their own. According to the FBI Internet Crime Complaint Center, Arizona residents lost approximately $392 million to consumer fraud in 2024. Over the same period, the AG’s office received almost 22,000 consumer complaints, answered more than 28,000 phone calls, and reviewed more than 23,000 emails from consumers regarding potential fraud.

On July 23, President Trump announced efforts to position the U.S. at the forefront of the global artificial intelligence (AI) race. The plan, “Winning the AI Race: America’s AI Action Plan,” was issued pursuant to the president’s January 23 Executive Order (EO) 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” and details how the federal government will advance the AI industry.

Introduction

The United States is navigating a new era of regulatory oversight and a shifting balance of power between federal and state regulators following the 2024 election cycle. As federal agencies retreat from or realign their enforcement priorities, state attorneys general (AGs) are increasingly taking the lead in policing companies, especially those that are consumer-facing, bridging perceived gaps left by shifting federal priorities and, in some cases, expanding regulatory enforcement into relatively new arenas.

One of many provisions in the “One Big Beautiful Bill Act,” passed by the U.S. House of Representatives, would place a 10-year “temporary pause” on states’ ability to regulate artificial intelligence (AI). Initially characterized as a moratorium, the provision was recast by Senate Republicans as a “temporary pause” to ensure its passage during the reconciliation process. The change was at least partially successful: the proposed pause cleared a procedural hurdle when the Senate parliamentarian concluded that it satisfied the “Byrd Rule” and could remain in the bill. The bill now heads to the Senate floor. If enacted, the temporary pause would mark the most significant federal action (or inaction) on AI to date.

In this episode of the Regulatory Oversight podcast, Stephen Piepgrass welcomes David Navetta, Lauren Geiser, and Dan Waltz to discuss the $51.75 million nationwide class settlement involving Clearview AI and its broader implications. The conversation focuses on Clearview AI’s facial recognition software, which has sparked controversy due to its use of publicly available images to generate biometric data.