On June 12, a bipartisan group of 23 attorneys general wrote a letter to the chief counsel for the National Telecommunications and Information Administration (NTIA), recommending a risk-based approach to a regulatory framework for using and deploying AI technology. Drawing on their “extensive experience enforcing data privacy and consumer protection laws,” the AGs noted that states such as Colorado, California, Connecticut, Tennessee, and Virginia already regulate AI through their respective data protection and privacy laws.

A bipartisan coalition of 23 state attorneys general, led by Virginia AG Jason Miyares, recently raised alarm over a products liability ruling they believe threatens state consumer protection laws. On May 30, the coalition filed an amicus brief supporting the plaintiffs’ claims in In Re: Fosamax (Alendronate Sodium) Products Liability Litigation, a consolidated case in which hundreds of plaintiffs claim to have suffered femur fractures as a result of taking Merck’s drug Fosamax.

On June 8, a bipartisan coalition of 28 attorneys general issued a letter supporting the Federal Communications Commission’s (FCC) proposal to close a “loophole” that currently allows lead generators to collect and sell personal consumer information to third parties based on a “single consumer consent,” which typically leads to multiple consumer solicitations (telemarketing calls and/or texts) beyond the scope of the original consent. At present, lead generators commonly condition quotes for goods or services on receiving consent to share the consumer’s personal information with their “marketing partners,” i.e., third-party solicitors.

The Federal Communications Commission (FCC) recently amended requirements concerning artificial or prerecorded voice calls, effective July 20.[1] See Proposed 47 C.F.R. § 64.1200. Notably, the FCC amended requirements concerning prerecorded noncommercial and nontelemarketing commercial calls by (1) capping such calls at three within any consecutive 30-day period, unless the caller has obtained prior express consent, and (2) requiring callers to provide specific opt-out mechanisms.

On June 7, the Federal Trade Commission (FTC) announced a request for information (RFI) to gain additional insight into how it can optimize joint enforcement with state attorneys general (state AGs) to protect consumers from fraud. The announcement signals a growing trend of cooperation between the FTC and state AGs, which we have also seen between the Consumer Financial Protection Bureau (CFPB) and state regulators.

On May 10, SoLo Funds, Inc. (SoLo), one of the largest community lending platforms in the United States, entered into a settlement with the District of Columbia Office of the Attorney General (OAG). The settlement resolves claims that the company’s lending practices violated D.C. usury law and constituted unfair, deceptive, and/or abusive acts under the D.C. Consumer Protection Procedures Act.

Many companies use machine learning algorithms and artificial intelligence (AI) to assist with employment decisions and tenant screening. In our final episode, Stephen Piepgrass and colleagues Ron Raether and Dave Gettings examine the use and impact of AI in background screening, including the potential risks companies may face with increased reliance on AI.