White House Releases National Policy Framework for Artificial Intelligence

WilmerHale Privacy and Cybersecurity Law Blog

On March 20, 2026, the White House released a National Policy Framework for Artificial Intelligence (the “Framework”), outlining policy recommendations to guide Congress in developing a unified federal approach to artificial intelligence (“AI”) legislation and regulation. The policy recommendations are consistent with views the Trump Administration has been signaling for some time—that the proliferation of state AI laws is creating barriers to innovation, that a national standard governing AI is needed, and that there are particular areas in which Congress should act to protect individuals from the potential personal and economic harms that could result from the continued adoption of AI technologies.

Last summer, the Administration urged Congress to adopt a temporary federal “moratorium” preempting certain state AI laws, but Congress ultimately declined to pursue that approach. Shortly thereafter, in December 2025, the Administration issued Executive Order 14365, Ensuring a National Policy Framework for Artificial Intelligence (often referred to as the “One Rule” Executive Order)—which we wrote about here—seeking to curtail the impact and continued proliferation of state AI regulation. Among other elements, the One Rule Executive Order directed the Department of Justice to establish an “AI Litigation Task Force” and instructed federal agencies to assess whether discretionary funding programs could be used to discourage certain types of state AI regulation. Notably, the One Rule Executive Order also committed the Administration to work with Congress to develop a federal legislative framework that would replace most categories of state-level AI laws with a unified national standard. The Framework released on March 20 follows up on that commitment by outlining the Administration’s preferred approach to federal AI legislation, providing directions on the key areas that any federal legislation should address and the categories of state AI laws that should be subject to preemption.

While the Framework spans a wide range of policy areas organized around six key objectives, the following takeaways are particularly salient for companies that develop, augment, deploy, or test AI systems:

  • Child Safety and Privacy Regulation: A significant focus of the Framework is protecting minors from AI harms and empowering parents to control their children’s digital environments. The Framework encourages Congress to adopt age-assurance requirements for AI platforms likely to be accessed by minors, tools that can be used by parents and guardians to manage privacy and engagement settings, and limits on data collection and online behavioral advertising. The Framework urges pursuing these goals while avoiding ambiguous content standards or open-ended liability regimes that could drive excessive litigation.
  • Copyright, Fair Use, and the Judiciary: The Framework acknowledges the judiciary’s authority to assess copyright and fair‑use questions related to AI training, while noting the Administration’s view that training AI models on copyrighted material does not violate copyright laws.
  • Antitrust Liability Exemption for Collective Licensing Negotiation: The Framework encourages Congress to consider enabling licensing or collective-rights frameworks that would allow intellectual property rights holders to collectively negotiate compensation from AI model developers without incurring antitrust liability.
  • Free Speech Protection: The Framework emphasizes limits on the federal government’s authority to coerce AI providers to restrict or alter content for partisan or ideological reasons. It also urges Congress to provide avenues for redress where such coercion occurs.
  • No New Federal Rulemaking Body: The Framework encourages relying on existing sector‑specific regulators and industry‑led standards rather than creating a new, centralized federal AI regulatory authority.
  • Federal Preemption of State AI Regulation: The Framework supports broad federal preemption of state AI laws that impose undue burdens, while preserving states’ traditional police powers to enforce laws of general applicability, especially to protect children, prevent fraud, and safeguard consumers. Additionally, the Framework calls for precluding states from regulating AI model development or imposing liability on AI developers for unlawful conduct by third parties using their systems.

The Framework is not a binding document and does not by itself impose new legal obligations or direct agencies to take specific regulatory actions. Instead, it outlines a series of recommended policy approaches for Congress to consider in drafting comprehensive federal AI legislation. Below, we analyze the Framework in greater detail in terms of its six key objectives as well as its guidance related to state‑law preemption.

I. Protecting Children and Empowering Parents

With respect to children’s privacy and safety, the Framework urges Congress to affirm that existing child privacy protections, including restrictions on data collection for model training and targeted advertising, apply to AI systems. It also encourages Congress to build on recent legislative efforts such as the TAKE IT DOWN Act, a bipartisan law enacted in May 2025 that criminalizes the nonconsensual publication of intimate digital deepfakes. In addition, the Framework calls on Congress to empower parents and guardians by providing them with tools to manage children’s online privacy settings, content exposure, and screen time.

Notably, for AI platforms and services likely to be accessed by minors, the Framework recommends that Congress establish commercially reasonable, privacy‑protective age‑assurance requirements (such as parental attestation) and require such platforms to implement features designed to reduce risks of harm to minors, including sexual exploitation and self‑harm.

At the same time, the Framework cautions Congress against adopting ambiguous content standards or open-ended liability regimes that could generate excessive litigation risk. Although it strongly favors federal preemption of state AI laws, the Framework underscores that Congress should not preempt states from enforcing generally applicable laws protecting children, such as prohibition of child sexual abuse material, where such content is generated using AI.

II. Safeguarding and Strengthening American Communities

The Framework links AI policy to broader community, infrastructure, and economic considerations, emphasizing that AI development should strengthen local communities and small businesses. It recommends that the expansion of AI infrastructure, such as the proliferation of data centers, not impose increased energy costs on residents. The Framework also urges Congress to augment existing law enforcement efforts to combat AI‑enabled fraud, impersonation, and scams that target vulnerable populations, such as seniors, positioning consumer protection as a core component of a national AI strategy.

III. Respecting Intellectual Property Rights and Supporting Creators

Regarding copyright, while the Framework states the Administration’s view that “training of AI models on copyrighted material does not violate copyright laws,” it acknowledges that reasonable arguments to the contrary exist and that resolution of this issue rests with the courts. The Framework advises Congress not to take legislative action that would influence judicial determinations regarding fair use.

At the same time, the Framework encourages Congress to consider enabling voluntary licensing or collective-rights frameworks that would allow rights holders to negotiate compensation from AI providers without incurring antitrust liability. It also calls for a federal regime to protect against the unauthorized distribution or commercial use of AI-generated digital replicas of individuals’ voice or likeness, while including safeguards for parody, satire, news reporting, and other First Amendment-protected expression.

IV. Preventing Censorship and Protecting Free Speech

The Framework emphasizes protecting free speech and limiting government coercion of technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological considerations. It recommends that Congress provide Americans with “effective means…to seek redress from the Federal Government” where federal agencies attempt to censor expression on AI platforms or dictate the information those systems provide.

V. Enabling Innovation and Ensuring American AI Dominance

The Framework advises Congress not to create a new federal AI regulator, instead favoring oversight through existing sector‑specific agencies with subject matter expertise and through industry‑led standards. To remove barriers to innovation and accelerate AI deployment, the Framework also recommends that Congress establish regulatory “sandboxes” for AI applications to support experimentation and help “unleash American ingenuity.”

VI. Educating Americans and Developing an AI-Ready Workforce

The Framework identifies workforce readiness as a critical pillar of U.S. AI leadership, highlighting the need to prepare Americans to participate in and benefit from AI-driven economic growth. It encourages congressional action to support education, training, and reskilling initiatives that expand AI literacy and technical expertise across the workforce.

VII. Establishing a Federal Policy Framework, Preempting Cumbersome State AI Laws

Finally, the Framework endorses federal preemption of state AI laws that the Administration regards as imposing undue burdens. The Framework urges, in particular, a single national standard rather than a “fragmented patchwork of state regulations.” At the same time, the Framework articulates limits on how far federal legislation should go in preempting state-level AI laws, acknowledging states’ authority to pass, maintain, and enforce laws of general applicability (including child protection and consumer protection laws) and to regulate state governments’ own use of AI.

Significantly, the Framework asserts that states should not be permitted to regulate AI in two key respects. First, the Framework takes the position that states should not regulate AI model development, which the Framework characterizes as inherently interstate in nature. Second, the Framework adopts the view that states should not penalize AI developers for unlawful conduct by third parties using their models. How policymakers will effectively limit such downstream liability—particularly in light of the growing prevalence of agentic AI—remains an open question.

The Framework’s endorsement of federal preemption is particularly notable when viewed against the current state-led approach to AI regulation. As discussed in our prior client alert on California’s Transparency in Frontier Artificial Intelligence Act, states have begun to fill the gap left by the absence of comprehensive federal legislation by imposing substantive obligations on AI developers, including transparency, risk management, and incident-reporting obligations. The Framework pushes back on this trend, signaling that federal policymakers may seek to limit, displace, or standardize state laws that impose obligations on AI developers or regulate frontier model development.

*          *          *

Overall, the Framework signals the Administration’s preferred direction for federal AI legislation, emphasizing a unified national standard, limits on regulatory fragmentation and litigation risk, and continued support for innovation. While not itself legally operative, the Framework suggests that Congress, with the Administration’s support, may move to preempt certain state AI laws while preserving targeted safeguards for children, consumers, and creators. Companies developing or deploying AI systems should closely monitor legislative developments and consider how potential federal AI legislation could affect compliance and risk management, including with respect to existing state-level regulatory schemes.
