On December 17, New Jersey announced its adoption of what its Attorney General calls the “most comprehensive state-level disparate impact regulations in the country.” Effective December 15, 2025, the Division on Civil Rights’ (DCR) new rules under the New Jersey Law Against Discrimination (LAD) codify guidance on disparate impact discrimination across housing, lending, employment, places of public accommodation, and contracting.

Attorney General Matthew Platkin and DCR Director Yolanda Melville framed the rules as reinforcing New Jersey’s “nation‑leading civil rights protections” at a time when the federal government is reconsidering disparate impact protections. The accompanying press release stresses that the rules “do not create additional liability under the LAD,” but instead clarify how, in DCR’s view, disparate impact claims are analyzed under existing law.

The new rules are available at this link, and DCR’s responses to comments are available here.

Core Legal Standard and Burden-Shifting Framework

The rules codify disparate impact liability in a way that generally will be familiar to those who have encountered it under federal laws. They begin from the principle that a neutral practice can be unlawful if it disproportionately harms members of a protected group. Under the rules, disparate impact discrimination occurs under the LAD when a facially neutral practice or policy results in a “disproportionately negative effect” on a protected class, even absent any discriminatory intent, unless the practice or policy is shown to be necessary to achieve a substantial, legitimate, nondiscriminatory interest and no less discriminatory alternative would achieve the same interest.

Protected characteristics under the LAD include, among others, race, national origin, religion, gender, gender identity and expression, disability, sexual orientation, age (in certain contexts), and source of lawful income.

Importantly, as under federal law, the rules adopt a structured burden‑shifting approach. In employment, public accommodations, and contracting, the complainant must first establish disparate impact. The covered entity then must justify the practice as necessary to achieve a substantial, legitimate, nondiscriminatory interest. If the covered entity meets that burden, the complainant may still prevail by showing that a less discriminatory alternative would serve the same interest.

In housing and housing financial assistance, the rules’ framework is even more protective of complainants, which DCR asserts is consistent with New Jersey appellate precedent. Once a complainant shows disparate impact, the covered entity bears the burden of proving both that the policy is necessary to achieve a substantial, legitimate, nondiscriminatory interest and that no less discriminatory alternative exists. The rules expressly require “empirical” evidence — not speculation — to satisfy these burdens, though anecdotal evidence may supplement empirical proof.

During the rulemaking, DCR removed the phrase “equally effective” from the “less discriminatory alternative” standard, explaining that requiring alternatives to be “equally effective” in every respect could improperly raise the bar for complainants and leave them with less protection than federal fair housing law provides. As adopted, a less discriminatory alternative need not be identical in cost or efficiency, so long as it still furthers the stated interest. In other words, New Jersey expects entities to consider alternatives that may be somewhat more burdensome if they substantially reduce discriminatory impact.

Coverage Across Housing, Lending, Employment, Public Accommodations, and Contracting

The new rules are sweeping in sectoral scope. They apply to “covered entities,” defined broadly to include housing providers, employers, labor organizations, employment agencies, real estate professionals, places of public accommodation, contractors, and any person required to comply with the LAD.

Across these sectors, the rules focus on policies that operate as blanket screens — such as automatic denials based on criminal history, credit scores, income cutoffs, geography, or rigid dress/discipline standards — and treat them as legally risky when they disproportionately exclude protected groups. The agency instead signals a strong preference for individualized, context‑specific assessments as less discriminatory ways to achieve legitimate business and governmental objectives.

AI, Automated Decision Tools, and the Civil Rights and Technology Initiative

A prominent feature of the rulemaking is its treatment of artificial intelligence and other automated decision‑making tools. DCR makes clear that automated employment decision tools, including resume screening algorithms, “online application technology” that filters applicants by scheduling availability, and facial analysis or video‑based tools, are subject to the same disparate impact analysis as any human‑driven process.
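Whether a tool is algorithmic or human-driven, the threshold question in that analysis is statistical: do the tool’s outcomes fall disproportionately on a protected class? As a purely illustrative sketch (the rules do not reduce disparate impact to a single statistic, and the 0.8 “four-fifths” benchmark below is a federal EEOC rule of thumb, not a standard drawn from the DCR rules), an adverse-impact check on a hypothetical screening tool’s outcomes might look like this:

```python
# Purely illustrative: a minimal adverse-impact check on the outcomes of a
# hypothetical automated screening tool. The 0.8 ("four-fifths") threshold is
# the EEOC's federal rule of thumb, not anything prescribed by the DCR rules.

from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group label, whether the tool advanced the applicant)."""
    totals, advanced = Counter(), Counter()
    for group, passed in outcomes:
        totals[group] += 1
        if passed:
            advanced[group] += 1
    return {g: advanced[g] / totals[g] for g in totals}

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each group's selection rate relative to the highest-rate group."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Example: flag groups whose ratio falls below the four-fifths benchmark.
rates = selection_rates([("A", True), ("A", True), ("A", False),
                         ("B", True), ("B", False), ("B", False)])
for group, ratio in impact_ratios(rates).items():
    if ratio < 0.8:
        print(f"Group {group}: ratio {ratio:.2f} -- potential disparate impact")
```

In practice, entities typically pair this kind of selection-rate comparison with tests of statistical significance and, given the rules’ emphasis on empirical evidence, documentation of the interest the practice serves and the alternatives considered.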

The rules impose an affirmative obligation on entities that use third‑party systems: if a covered entity relies on a vendor’s product for a practice or policy that has a disparate impact, the covered entity must take reasonable steps to ensure that the vendor’s product complies with the LAD and the rules. The proposed rules on this topic were a source of confusion, and DCR ultimately deleted language that some read as creating a safe harbor for entities that take such steps. In the adoption notice, DCR emphasizes that “reasonable steps” to ensure compliance are not a complete defense “if they do not prevent or mitigate disparate impact discrimination.”

In other words, if a tool in fact produces unlawful disparate impact, the user may still be liable, even if the user did not build the tool itself and even if the user took reasonable (but ineffectual) steps to ensure compliance. The prospect of disparate impact liability arising from the use of a third party’s products or services will be of interest not only to entities covered by DCR’s rules but also to vendors of software and other products or services.

These AI-related provisions are framed as part of DCR’s broader Civil Rights and Technology Initiative, which has already produced separate guidance on algorithmic discrimination. The rules reinforce that automated decision systems are a priority enforcement area for New Jersey, particularly where they are used at scale in housing, credit, hiring, or other high‑impact decisions. In addition, although the rules address such technologies most directly in the employment context, their discussion of the topic sheds light on how DCR is likely to approach similar technologies in other contexts, such as housing and lending.

Interaction with Federal Laws and Regulations

Throughout the rulemaking, DCR addressed whether and how its standards align with federal anti‑discrimination frameworks. Aside from DCR’s explicit adoption of a more complainant-friendly standard in the housing context, DCR generally characterizes its rules as consistent with federal disparate impact standards as they existed at the end of the Biden Administration. Instability in the federal approach to disparate impact discrimination may prove to be a source of uncertainty regarding New Jersey’s application of its parallel standards.

Our Take

This rulemaking has been in process for some time and was anticipated. We find it noteworthy that the rules adopt the federal disparate impact standard in significant respects, but the statement that a less discriminatory alternative need not be “equally effective” is a source of concern for covered entities, as is the burden-shifting framework adopted for housing and housing financial assistance. It also seems significant that, although DCR adopted specific rules for applying disparate impact in a variety of contexts, lending received relatively little attention. We’ll be watching how the rules play out in practice in New Jersey and will report on any significant developments.