Ashley Taylor, Clayton Friedman, Gene Fishel, and Jay Myers of Troutman Pepper Locke LLP discuss actions by state attorneys general under existing and AI-specific laws to address misuse and legal violations of AI.
Reviewing, analyzing, and navigating compliance, enforcement, investigation, and litigation developments and trends in the state and federal regulatory landscape
Introduction
On Thursday, March 20, a federal judge in the Northern District of Illinois granted final approval to a settlement agreement under which Clearview AI (Clearview) agreed to pay an estimated $51.75 million to a nationwide class if one of several contingencies takes place. The approved settlement resolves In Re: Clearview AI, Inc. Consumer Privacy Litigation, No. 1:21-cv-00135 (N.D. Ill.), a multidistrict suit alleging that the company’s automatic collection, storage, and use of biometric data violated various privacy laws, including Illinois’ Biometric Information Privacy Act (BIPA). The unorthodox settlement not only preserves Clearview’s business model but may also insulate Clearview from subsequent or parallel regulatory investigations without requiring the company to jeopardize the liquidity necessary for continued growth. Ultimately, the settlement appears to be a favorable outcome for the company, particularly given that it was approved over the objections of 23 state attorneys general (AGs). U.S. District Judge Sharon Johnson Coleman found the settlement fair, reasonable, and adequate.
METRC, Inc., the predominant provider of seed-to-sale tracking software used by state regulatory bodies overseeing legal cannabis markets across the U.S., faces serious allegations detailed in a recent lawsuit filed in Oregon. The lawsuit, brought by a former executive at METRC, accuses the company of whistleblower retaliation and wrongful termination under Oregon law. Central to the plaintiff’s complaint are allegations that METRC knowingly ignored substantial compliance violations within its tracking systems in California, potentially facilitating illegal diversion of cannabis products. The litigation raises critical concerns for cannabis regulatory compliance, not only in Oregon and California but also in the 25 other jurisdictions that rely on METRC’s systems.
On February 4, the Office of the Minnesota Attorney General (AG) released its second Report on Emerging Technology and Its Effect on Youth Well-Being, outlining the effects young Minnesota residents allegedly experience from using social media and artificial intelligence (AI). The report highlights alleged adverse effects that technology platforms have on minors and claims that specific design choices exacerbate these issues.
State attorneys general (AGs) continue to play a pivotal role as innovators, shaping the regulatory environment by leveraging their expertise and resources to influence policy and practice. The public-facing nature of AG offices across the U.S. compels them to respond to constituent concerns on abbreviated timetables. This political sensitivity, combined with the AGs’ authority to address both local and national issues, underscores their significant influence in the current regulatory environment.
In a recent interview, Karen White, the executive director of the Attorney General Alliance (AGA), discussed the organization’s impactful partnership with PBS, its involvement in the Bipartisan Leadership Project, and its proactive stance on artificial intelligence (AI). Originally a regional group, the AGA has grown into a significant force addressing complex issues through bipartisan collaboration and innovative partnerships.
Published in Law360 on January 22, 2025. © Copyright 2025, Portfolio Media, Inc., publisher of Law360. Reprinted here with permission.
As detailed in the first installment of this two-part article, state attorneys general across the U.S. took bold action in 2024 to address what they perceived as unlawful corporate conduct in several areas, including privacy and data security, financial transparency, children’s internet safety, and broader consumer protection.
Missouri’s attorney general (AG) announced on X.com (formerly Twitter) that he is “issuing a rule requiring Big Tech to guarantee algorithmic choice for social media users.” [X.com post (January 17, 2025, roughly 3:35 p.m. EST)] He intends to use his authority “under consumer protection law,” known in that state as the Missouri Merchandising Practices Act, “to ensure Big Tech companies are transparent about the algorithms they use and offer consumers the option to select alternatives.” [X.com post] The Missouri AG touts this rule as the “first of its kind” in an “effort to protect free speech and safeguard consumers from censorship.” [Press release]
Join Troutman Pepper Locke Partner Brett Mason for a podcast series analyzing the intersection of artificial intelligence (AI), health care, and the law.
On January 9, New Jersey Attorney General (AG) Matthew J. Platkin and the Division on Civil Rights (DCR) launched a new Civil Rights and Technology Initiative aimed at addressing the potential for discrimination and bias associated with artificial intelligence (AI) and other decision-making technologies. The announcement is one of many recent examples of AGs leading the development of AI regulation. The New Jersey initiative is informed by recommendations from Governor Phil Murphy’s Artificial Intelligence Task Force, which emphasized the need for public education on bias and discrimination related to AI deployment.