As the use of artificial intelligence (AI) becomes more prevalent in day-to-day life, and in the legal field in particular, thorny questions arise regarding the implications of that use. One such question is whether exchanges with a publicly available generative AI platform in connection with pending litigation are protected by the attorney-client privilege or the work product doctrine. In a matter of first impression nationwide, U.S. District Judge Jed S. Rakoff of the Southern District of New York answered that question in the negative and required a defendant to provide the prosecution with documents memorializing litigation-related communications with a generative AI platform.[1] Applying traditional principles governing the attorney-client privilege and the work product doctrine, the court reasoned that the communications did not involve an attorney-client relationship, were not confidential, were not made for the purpose of obtaining legal advice, and did not reflect an attorney’s trial strategy.[2] The ruling will likely affect whether legal protections are afforded to AI communications, prompts, and output in both litigation and regulatory inquiries, including investigations by state attorneys general (AGs).

On February 24, the New Jersey State Senate unanimously confirmed the appointment of Jennifer Davenport to serve as New Jersey’s attorney general (AG). Davenport (whose nomination we covered here) has been serving in an acting capacity since Governor Mikie Sherrill took office in January.

In this special crossover episode of Regulatory Oversight and The Consumer Finance Podcast, Chris Willis is joined by colleagues Lori Sommerfield and Matthew Berns to discuss New Jersey’s sweeping new disparate impact regulations under the Law Against Discrimination. They break down one of the most comprehensive state-level disparate impact rules in the U.S., how it contrasts with traditional federal standards, and its implications for enforcement in financial services. The discussion dives into credit scores, underwriting models, AI and automated decision-making tools, and the differences between New Jersey’s approach and the Trump administration’s effort to scale back disparate impact at the federal level, offering practical takeaways for lenders and other covered entities navigating this shifting landscape.

In a pair of recent submissions to the Federal Communications Commission (FCC), a bipartisan coalition including more than 20 state attorneys general (AGs) opposed action by the FCC to preempt state and local laws relating to artificial intelligence (AI). The coalition’s comments reflect persistent concerns among AGs about how businesses use AI when interacting with their residents, even as some federal policymakers support limiting states’ ability to address those concerns.

State attorneys general (AGs) are among the most active and influential regulators in the U.S., using broad statutory authority, political visibility, and growing technical knowledge to shape policy and enforcement across sectors. In 2025, they asserted their authority to shape the legal and regulatory environment across the U.S. through aggressive and coordinated action. Despite changing

In this episode of Regulatory Oversight, host Ashley Taylor is joined by Colorado Senate Majority Leader Robert Rodriguez and Troutman Pepper Locke Privacy + Cyber partner David Stauss for an in‑depth discussion of the Colorado AI Act — widely viewed as the nation’s first comprehensive legislative framework focused on high‑risk AI systems and algorithmic discrimination. Senator Rodriguez explains how Colorado’s work on consumer privacy laid the groundwork for AI regulation and walks through the origins, goals, and core provisions of the Act, including its emphasis on transparency, risk assessments, and protecting consumers in sectors such as employment, housing, health care, education, finance, and government services.

On December 11, 2025, New York Governor Kathy Hochul signed into law two bills governing the use of artificial intelligence (AI) in advertising. The governor’s office described the bills as “first-in-the-nation legislation to protect consumers and boost AI transparency in the film industry.” Both bills passed the New York Legislature unanimously.

On December 17, New Jersey announced its adoption of what its Attorney General is calling the “most comprehensive state-level disparate impact regulations in the country.” Effective December 15, 2025, the Division on Civil Rights’ (DCR) new rules under the New Jersey Law Against Discrimination (LAD) codify guidance on disparate impact discrimination across housing, lending, employment, places of public accommodation, and contracting.

On December 11, President Donald Trump signed an executive order (EO) that establishes a national artificial intelligence (AI) regulatory framework and attempts to preempt enforcement of state AI laws. Titled “Ensuring a National Policy Framework for Artificial Intelligence,” the EO states that “[i]t is the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.” This latest effort follows bipartisan opposition in Congress and among state attorneys general (AGs) to previous legislative attempts this year to supersede state AI laws. While the order seeks to minimize a burdensome AI regulatory patchwork, compliance will remain complex given various state enforcement tools.