On June 18, Arizona Attorney General (AG) Kris Mayes, in partnership with the Better Business Bureau (BBB), announced a new consumer education campaign aimed at teaching Arizona residents how to avoid falling victim to a variety of scams. The campaign targets consumers lacking awareness of such scams, especially senior citizens, through a series of video public service announcements (PSAs) designed to help Arizona consumers spot and avoid scams on their own. According to the FBI Internet Crime Complaint Center, Arizona residents lost approximately $392 million to consumer fraud in 2024. During the same period, the AG's office received almost 22,000 consumer complaints, answered more than 28,000 phone calls, and reviewed more than 23,000 emails from consumers regarding potential fraud.

On July 23, President Trump announced efforts to position the U.S. at the forefront of the global artificial intelligence (AI) race. “Winning the AI Race: America’s AI Action Plan” details how the federal government will advance the AI industry and was issued pursuant to the president’s January 23 Executive Order (EO) 14179, “Removing Barriers to American Leadership in Artificial Intelligence.”

Introduction

The United States is navigating a new era of regulatory oversight and a shifting balance of power between federal and state regulators following the 2024 election cycle. As federal agencies retreat from or realign their regulatory enforcement priorities, state attorneys general (AGs) are increasingly taking the lead in policing companies — especially those that are consumer-facing — bridging perceived gaps left by shifting federal priorities and, in some cases, feeling emboldened to expand regulatory enforcement into relatively new arenas.

One of many provisions in the "One Big Beautiful Bill Act," passed by the U.S. House of Representatives, would place a 10-year "temporary pause" on states' ability to regulate artificial intelligence (AI). Initially styled as a moratorium, the provision was recharacterized by Senate Republicans as a "temporary pause" to improve its chances of surviving the reconciliation process. That effort was at least partially successful: the proposed "temporary pause" overcame a procedural hurdle when the Senate parliamentarian concluded that it satisfies the "Byrd Rule" and may remain in the bill. The bill now heads to the Senate floor. If enacted, the temporary pause would mark the most significant federal action (or inaction) related to AI to date.

In this episode of the Regulatory Oversight podcast, Stephen Piepgrass welcomes David Navetta, Lauren Geiser, and Dan Waltz to discuss the $51.75 million nationwide class settlement involving Clearview AI and its broader implications. The conversation focuses on Clearview AI’s facial recognition software, which has sparked controversy due to its use of publicly available images to generate biometric data.

On June 2, the Texas legislature passed the Texas Responsible Artificial Intelligence Governance Act (TX AI Act or bill), which now heads to the governor for his signature or veto. If the governor signs it into law, the bill will take effect January 1, 2026. It is the most comprehensive piece of AI governance legislation to pass a state legislature to date. If enacted, Texas will become the fourth state, after Colorado, Utah, and California, to pass AI-specific legislation.

Introduction

On Thursday, March 20, a federal judge in the Northern District of Illinois granted final approval to a settlement agreement under which Clearview AI (Clearview) agreed to pay an estimated $51.75 million to a nationwide class if one of several contingencies takes place. The approved settlement resolves In Re: Clearview AI, Inc. Consumer Privacy Litigation, No. 1:21-cv-00135 (N.D. Ill.), a multidistrict suit alleging that the company's automatic collection, storage, and use of biometric data violated various privacy laws, including Illinois' Biometric Information Privacy Act (BIPA). The unorthodox settlement not only preserves Clearview's business model, but may also insulate Clearview from subsequent or parallel regulatory investigations without requiring the company to jeopardize the liquidity necessary for continued growth. Ultimately, the settlement appears to be a good outcome for the company, especially given that it was achieved over the objections of 23 state attorneys general (AGs). U.S. District Judge Sharon Johnson Coleman stated that the settlement is fair, reasonable, and adequate.

METRC, Inc., the predominant provider of seed-to-sale tracking software used by state regulatory bodies overseeing legal cannabis markets across the U.S., faces serious allegations in a recent lawsuit filed in Oregon. The suit, brought by a former METRC executive, accuses the company of whistleblower retaliation and wrongful termination under Oregon law. Central to the plaintiff's complaint are allegations that METRC knowingly ignored substantial compliance violations within its tracking systems in California, potentially facilitating the illegal diversion of cannabis products. The litigation raises critical concerns for cannabis regulatory compliance, not only in Oregon and California but also in the 25 other jurisdictions that rely on METRC's systems.

On February 4, the Office of the Minnesota Attorney General (AG) released its second Report on Emerging Technology and Its Effect on Youth Well-Being, outlining the effects young Minnesota residents allegedly experience from using social media and artificial intelligence (AI). The report highlights alleged adverse effects that technology platforms have on minors and claims that specific design choices exacerbate these issues.