Key point: With a new governor taking office in New Jersey later this month, the fate of rules proposed last year to implement the New Jersey Data Privacy Act (NJDPA) will be decided by the incoming administration.
In this episode of Regulatory Oversight, host Ashley Taylor is joined by Colorado Senate Majority Leader Robert Rodriguez and Troutman Pepper Locke Privacy + Cyber partner David Stauss for an in‑depth discussion of the Colorado AI Act — widely viewed as the nation’s first comprehensive legislative framework focused on high‑risk AI systems and algorithmic discrimination. Senator Rodriguez explains how Colorado’s work on consumer privacy laid the groundwork for AI regulation and walks through the origins, goals, and core provisions of the Act, including its emphasis on transparency, risk assessments, and protecting consumers in sectors such as employment, housing, health care, education, finance, and government services.
On December 11, 2025, New York Governor Kathy Hochul signed into law two bills governing the use of artificial intelligence (AI) in advertising. The governor’s office described the bills as “first-in-the-nation legislation to protect consumers and boost AI transparency in the film industry.” Both bills unanimously passed through the New York Legislature.
On December 17, New Jersey announced its adoption of what its Attorney General is calling the “most comprehensive state-level disparate impact regulations in the country.” Effective December 15, 2025, the Division on Civil Rights’ (DCR) new rules under the New Jersey Law Against Discrimination (LAD) codify guidance on disparate impact discrimination across housing, lending, employment, places of public accommodation, and contracting.
On December 11, President Donald Trump signed an executive order (EO) that establishes a national artificial intelligence (AI) regulatory framework and attempts to preempt enforcement of state AI laws. Titled “Ensuring a National Policy Framework for Artificial Intelligence,” the EO states that “[i]t is the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.” This latest effort follows bipartisan opposition in Congress and among state attorneys general (AGs) to previous legislative attempts this year to supersede state AI laws. While the order seeks to minimize a burdensome AI regulatory patchwork, compliance will remain complex given various state enforcement tools.
In this episode of our special 12 Days of Regulatory Insights podcast series, Ashley Taylor, co-leader of Troutman Pepper Locke’s State AG team, sits down with Privacy and Cyber chair Ron Raether to discuss how state attorneys general (AGs) are shaping the regulatory landscape for social media and the broader ad tech ecosystem.
On November 13, North Carolina Attorney General (AG) Jeff Jackson and Utah AG Derek Brown, along with the Attorney General Alliance, announced a task force in conjunction with generative artificial intelligence (AI) developers, including OpenAI and Microsoft, to identify and develop consumer safeguards within AI systems as these technologies continue to rapidly proliferate.
Troutman Pepper Locke is proud to sponsor the Government Investigations & Civil Litigation Institute’s 11th Annual Meeting in Santa Ana Pueblo, New Mexico. Ashley Taylor will be moderating the “Navigating State Jurisdictional Waters: State Attorneys General and Their Reach” session, Ghillaine Reid will be a panelist on the “AI Ethics: Guardrails and Grey Areas in…
On September 29, 2025, California Governor Gavin Newsom signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act, into law, effective January 1, 2026. The act builds on the recommendations in the “California Report on Frontier AI Policy,” released to the public on June 17, 2025, which detailed key principles to guide the drafting process, including grounding AI policy in empirical research and providing greater transparency into AI systems. Home to 32 of the top 50 AI companies worldwide, California dominates the AI industry, so it is no surprise that it is the first state to adopt rules promoting safety, transparency, and incident reporting for frontier models. The new act is expected to set the stage for similar AI legislation across the U.S.
In this episode of Regulatory Oversight, Stephen Piepgrass is joined by Zack Condry, co-founder of Watermark Strategies, to analyze the evolving landscape of crisis management and the critical role of strategic communication in navigating complex issues. They explore effective communication strategies, public relations, and the evolving role of AI in managing crises. Zack shares insights from his extensive experience in corporate communications and public affairs, from his background managing political campaigns to his current work developing digital strategies for high-profile clients.