On March 20, 2026, the White House released its National Policy Framework for Artificial Intelligence ("the Framework") outlining the administration’s recommended federal approach to AI regulation. The Framework is the most concrete statement yet of where the administration wants Congress to take federal AI policy. If Congress adopts this approach, it would reshape the US AI regulatory landscape, significantly affecting how companies navigate an already complex web of state, federal and global obligations.

The Framework follows through on the December 11, 2025, executive order (EO) “Ensuring a National Policy Framework for Artificial Intelligence,” which we discussed in this December 12 alert. This new set of recommendations includes many elements the Trump administration has previously advocated, including preemption of some state AI laws, a proposal first included in early versions of the “One Big Beautiful Bill” legislation last year. The Framework encourages Congress to pass laws protecting children and their data in the AI context, but leaves states with the ability to enforce their generally applicable child protection laws. Importantly, the Framework specifically states that Congress should not preempt consumer protection laws that may apply to AI, which is one of the primary bases on which states are regulating the consumer-facing AI industry.

In other areas, the Framework generally supports the AI industry, promotes AI adoption through measures like regulatory sandboxes, and discourages new regulatory regimes or agencies specific to AI. The Framework also states that the Trump administration believes training on copyrighted material does not violate copyright laws but recommends leaving the courts to resolve the issue in specific cases and contexts. It generally encourages Congress to enable a nonmandatory licensing framework for training data.

What the Framework proposes

The Framework is broad in scope and covers, at a high level, seven key priority areas for the administration:

  1. “Protecting children and empowering parents”
  2. “Safeguarding and strengthening American communities”
  3. “Respecting intellectual property rights and supporting creators”
  4. “Preventing censorship and protecting free speech”
  5. “Enabling innovation and ensuring American AI dominance”
  6. “Educating Americans and developing an AI-ready workforce”
  7. “Establishing a federal policy framework, preempting cumbersome state AI laws”

  1. Clear emphasis on child safety and age assurance

What the Framework says: The Framework calls for parental account controls, privacy-protective age assurance requirements for AI services likely to be accessed by minors, and product features designed to reduce risks of sexual exploitation and self-harm. Notably, the Framework also explicitly calls for Congress to preserve state authority to enforce general child protection laws, including prohibitions on AI-generated child sexual abuse material. This is a significant carve-out from the broader preemption push discussed below.

Why it matters: Positioning child safety first in the Framework is a clear statement of political intent and is likely one of the Framework’s bipartisan entry points. The recommendations echo the child safety direction of the proposed Kids Online Safety Act, which would also mandate default safety settings and parental tools for minor users. As we discussed in this March 5 alert, this focus on online child safety issues is one that has driven regulations in other jurisdictions, including the European Union, United Kingdom and Australia, to name a few.

  2. Infrastructure and data center energy costs

What the Framework says: The Framework recommends streamlined processes to allow for the continued development of data centers to support AI. Recognizing that this may result in increased costs to consumers, the Framework also proposes protecting residential ratepayers from electricity cost increases, noting the White House’s recent Ratepayer Protection Pledge, which secured commitments from companies to build, bring or buy their own power and cover the cost of grid upgrades. The Framework also calls for broader social investment in supporting the use of AI in communities and assisting law enforcement in combating fraud.

The administration has also made clear that there are national security considerations and concerns at play, suggesting that Congress should ensure federal agencies within the national security sphere “possess sufficient technical capacity” to understand and assess the risks around frontier AI models.

Why it matters: The Framework’s parallel emphasis on supporting investment in AI infrastructure while translating AI into visible community benefits reflects growing public attention to the trade-offs inherent in AI. While the Framework stops short of calling for federal legislative protection of communities from the indirect impacts of AI infrastructure investment, it is significant that the proposed preemption excludes state zoning laws, explicitly preserving state and local control in this area.

  3. Copyright and intellectual property

What the Framework says: The Framework states that training AI models on copyrighted material does not violate copyright law, but it recommends that Congress leave the issue to the courts to resolve. The Framework also asks Congress to consider supporting nonmandatory collective licensing systems for use of copyrighted works to train AI. And it encourages Congress to monitor developing precedents related to the application of copyright law to AI, and consider further legislative action if there are gaps in the law or if additional protection is needed for copyright owners.

In addition, the Framework asks Congress to consider establishing new federal law that protects individuals from AI-generated “digital replicas” of their likenesses, while making clear exceptions for parody, satire, news reporting and other First Amendment-protected expression.

Why it matters: The issue of whether it is a “fair use” to train AI models on copyrighted material is the subject of dozens of active lawsuits across the US. The Framework stakes out the position that Congress should – for now – take a hands-off approach and leave it to the courts to decide how to apply fair use to AI. But it leaves open the prospect of legislative intervention if, for unspecified reasons, the need arises. 

Separately, the Framework tacitly supports Congress’s ongoing efforts to enact federal law directed at AI-generated digital replicas of a person’s voice, likeness or other identifiable attributes – essentially endorsing the thrust of bills like the NO FAKES Act, so long as the law is careful to preserve First Amendment expression. Currently, individuals must rely on a patchwork of state statutes and common law doctrines to protect against misappropriation of their likenesses. A federal law would create a more uniform national standard and could establish a basis for protecting celebrities and the public from malicious AI-generated impersonations.

  4. Speech and the First Amendment

What the Framework says: The Framework focuses on preventing the federal government “from coercing technology providers, including AI providers” to suppress or alter lawful expression based on partisan or ideological agendas. The Framework recommends Congress create a mechanism for “Americans to seek redress from the Federal Government” for claims related to agencies that “censor expression on” or “dictate” information provided by AI platforms.

Why it matters: Depending on whether and how such a redress mechanism is codified, AI providers may consider carefully documenting all government communications regarding AI training, outputs, and moderation policies and procedures. AI providers may find themselves caught in litigation between individuals and federal agencies if moderation actions are perceived as directed by the government. This will require nimble balancing between complying with a federal redress mechanism (if enacted) and maintaining reliable AI guardrails.

  5. No new federal AI regulator, accelerating innovation

What the Framework says: The Framework explicitly instructs Congress not to create a new federal rulemaking body for AI. Instead, it encourages sector-specific oversight through existing regulatory bodies and industry-led standards. It also endorses the use of regulatory sandboxes. Additionally, the Framework supports expanding access to federal datasets in AI-ready formats.

Why it matters: On the surface, this recommendation parallels the administration’s broader deregulatory trends. However, without a single federal body holding rulemaking or coordinating authority, there is a risk of continued, fragmented rulemaking and prioritization across the multiple existing federal agencies with sector-specific interests.

  6. Workforce and skills development

What the Framework says: The Framework encourages Congress to ensure that existing education and workforce training and support programs, including apprenticeships, affirmatively incorporate AI training; to study trends in task-level workforce realignment driven by AI; and to support land-grant institutions in providing technical assistance, launching demonstration projects and running AI youth development programs.

Why it matters: This recommendation implicitly recognizes the significant transformation of the American workforce that AI is already driving, and is likely to continue to drive.

  7. Preemption of state AI laws

What the Framework says: In line with its prior EO, the Framework calls for Congress to preempt state AI laws that impose “undue burdens.” In aiming to create a “minimally burdensome national standard,” rather than “fifty discordant ones,” the Framework calls on Congress to preserve three areas within state authority, while recommending three areas for federal governance.

Under the Framework’s approach, states would retain authority over:

  1. “Traditional police powers” to enforce laws of general applicability against AI developers and users. Importantly, this would include consumer protection laws, laws protecting children (as noted above) and anti-fraud measures.
  2. State zoning laws.
  3. Procurement by states of AI for their own use, such as in law enforcement and public education.

Conversely, under the Framework’s approach, states would not be allowed to:

  1. Regulate AI development, given AI’s “inherently interstate” dimensions and its implications for foreign policy and national security.
  2. “Unduly burden” Americans’ engagement in lawful activity merely because it is AI-assisted.
  3. “Penalize AI developers” for unlawful conduct by third parties using their models.

Why it matters: This version of preemption in the Framework carves out significant authority that will remain with the states, and it is likely to give rise to complex questions about state law-making authority relating to AI safety, especially where that intersects with the training and development of AI models. Importantly, most states and their attorneys general are using existing consumer protection laws to investigate or enforce against AI developers, which the Framework leaves untouched.

What next?

The Framework is the clearest statement yet of the administration’s preferred end state for federal AI legislation: a national, innovation-oriented approach with targeted provisions on child safety, digital replicas, copyright-adjacent issues and workforce development; no new AI regulator; and preemption of certain state AI laws while leaving significant authority with the states. Whether Congress picks up this legislative proposal and moves ahead with it will be a critical space to watch.

Pending legislative action, businesses should ensure they remain in compliance with existing, applicable state AI laws.

At Cooley, our cross-functional team of tech regulatory and enforcement practitioners leverages deep, hands-on experience helping businesses understand the developing AI policy landscape and navigate complex legal frameworks – including the expanse of US state AI laws and global frameworks, like the EU AI Act. Reach out for more information about how our horizon-scanning and regulatory risk management experience can support you and your team in managing AI risk.
