On December 17, the Senate voted to approve the National Defense Authorization Act (NDAA), sending it to the President’s desk for signature. At more than 3,000 pages, the must-pass legislation sets out the nation’s defense policy agenda for 2026, authorizing roughly $900 billion in critical defense spending across a range of programs.
The NDAA places a predictably heavy emphasis on artificial intelligence (AI), including, specifically, the need for rapid AI integration and coordination with industry to sustain America’s warfighting edge and intelligence capabilities. But unlike President Trump’s AI Action Plan, it’s not all gas, no brakes. In recognition of the uncertain risks arising from rapid deployment of AI, the NDAA calls for the creation of several new internal processes and governance frameworks across the Pentagon and intelligence community to identify, measure and mitigate risks from these advanced systems. Moreover, it includes several investment and procurement restrictions to stem the growth of China’s AI industry, including new provisions in the Outbound Investment regime (which the US Treasury Department administers to monitor investments in critical technologies like AI in countries of concern) as well as new prohibitions on the acquisition of certain foreign AI technologies. The focus on AI risks carries through the NDAA’s cyber-focused provisions, which mandate the development of a comprehensive cybersecurity framework for AI and machine learning systems to address adversarial risks.
The following post provides a deeper dive into some of the key provisions specifically related to AI and cybersecurity and what they could mean for industry. It also highlights some notable AI and cyber provisions that were omitted in the negotiation process, which we anticipate Congress will revisit in the coming months.
AI Provisions
The NDAA directs the creation of several new committees to guide the development and procurement of AI systems within the Pentagon and to assess risks posed by third-party models. It also directs the intelligence community (IC) to take steps to scale commonly used AI tools and to evaluate the use of commercially available AI tools in classified environments:
- Section 1533 tasks the Secretary of Defense with establishing a cross-functional team for AI model assessment and oversight by June 2026. The team must develop a department-wide assessment framework to enable the development and procurement of AI models by the Pentagon. The assessment framework, to be completed by June 2027, must include standards for AI model performance, testing procedures, security requirements and principles for the ethical use of AI. Notably, the notion of a federal government framework to guide risk assessments around AI use and procurement is not new. President Biden directed development of an AI Risk Framework via NSM-25 in October 2024, and it was swiftly issued. It’s not clear to what extent the Biden-era AI Risk Framework, which has not been rescinded as of this writing, will inform the Pentagon’s fulfillment of this new directive. While that framework included a list of prohibited and high-risk use cases, the NDAA doesn’t direct development of a similar list for the Pentagon. That said, Section 1654 does prohibit the Secretary of Defense from developing or operating a missile defense system with kinetic capabilities that relies on a subscription-based service, a pay-for-service model or a recurring-fee model to engage or intercept a target, on the ground that target engagement and interception are inherently governmental functions that cannot be delegated.
- Section 1534 calls on the Secretary of Defense to create a task force to develop and deploy AI sandbox environments—effectively isolated computing environments—to support the Pentagon’s experimentation with and training and development of AI. The task force must create standard requirements for AI sandbox environments across the department.
- Section 1535 creates the Artificial Intelligence Futures Steering Committee to shape the Pentagon’s long-term AI strategy, identify emerging technologies, and recommend investments in research, workforce development and ethical frameworks. The committee is specifically focused on anticipating the emergence of general AI and other broad, transformative AI capabilities and on ensuring that these advancements are integrated responsibly into defense operations.
- Section 6602 instructs the IC’s Chief Information Officer and Chief Artificial Intelligence Officer to identify commonly used AI systems or functions that might be reused by other IC elements without significant modification. The Chief Information Officer of each IC element must also create a policy for sharing custom-developed code with other IC elements. Section 6602(e) specifically directs the heads of IC elements to evaluate the performance of procured and element-developed AI. However, it doesn’t offer much guidance on how these evaluations are to be conducted, nor does it identify any specific AI use cases that should be categorically prohibited or classified as high risk, as the Biden AI Risk Framework referenced above did.
- Section 6603 addresses the hosting of publicly available models—such as ChatGPT and Google Gemini—in a classified environment and calls for the creation of policies that set AI testing standards that evaluate their “performance, efficacy, safety, fairness, transparency, accountability, appropriateness, lawfulness, and trustworthiness.” These policies are designed to address common uses, such as machine translation, object detection and object recognition.
The NDAA also provides additional guidance on the contracting process to streamline acquisition of AI systems.
- Importantly, Section 6602(d) directs that the Chief Information Officer of the IC develop model contractual terms for the IC to address “technical data rights and rights related to artificial intelligence dataset requirements … minimize dependency on proprietary information” and “promote the adoption of procurement practices that encourage competition.” These model contract terms are not mandatory, though they will likely inform government contracting practices in this space.
- Section 6603 includes a notable rule of construction, indicating that nothing in this section should be construed to “authorize an officer or employee of the intelligence community to direct a vendor or prospective vendor to alter a model to favor a particular viewpoint.” This language aligns with verbiage included in President Trump’s June 2025 Executive Order on Preventing Woke AI in the Federal Government, which directed agency heads to procure only AI that is “truth-seeking” and “ideologically neutral.” However, it remains uncertain how this directive, along with the implementation guidance subsequently issued by the Office of Management and Budget, will be implemented by major AI companies.
Consistent with growing anxiety around Chinese-owned generative AI systems, the NDAA mandates the exclusion and removal of covered AI from the systems and devices of the Pentagon. It also includes new restrictions on outbound investment in certain covered technologies, such as AI.
- Section 1532 prohibits the Pentagon from using or acquiring AI systems domiciled in covered nations (defined as the Democratic People’s Republic of Korea, the People’s Republic of China, the Russian Federation and the Islamic Republic of Iran) or otherwise subject to foreign influence or control, including the DeepSeek and High-Flyer AI systems. Contractors are likewise prohibited from using these AI tools. The Secretary of Defense may grant a waiver on a case-by-case basis for the purposes of scientific research, training, evaluation or military activities supporting national security functions such as counterterrorism or counterintelligence. Section 6604 similarly instructs the Director of National Intelligence and other heads of the IC to create guidelines that require the removal of DeepSeek from national security systems.
- Section 8521 amends the Defense Production Act to direct that the Treasury Department update its outbound investment rule to further restrict US investment in certain sensitive technologies, such as hypersonic weapons. Specifically, the NDAA empowers the Treasury to (i) add countries to the list of specified countries of concern, including Cuba, Iran, North Korea, Russia and Venezuela, (ii) potentially expand the covered foreign parties under the outbound investment rule, and (iii) impose restrictions on a potentially broader range of national security–related transactions involving those covered persons. The Department of the Treasury is further authorized to establish a publicly accessible, non-exhaustive database identifying covered foreign persons engaged in either a prohibited or a notifiable technology transaction.
Cybersecurity Provisions
The NDAA also seeks to streamline cybersecurity requirements while imposing new cybersecurity and AI-related safeguards to mitigate threats from adversaries:
- Section 1511 requires, within 90 days, strengthened cybersecurity requirements in any contracts for secure mobile phones and related telecommunications services provided to senior Pentagon officials and personnel performing sensitive national security functions. These new cybersecurity requirements must include encryption, persistent identifier obfuscation and continuous monitoring capabilities.
- Section 1512 requires the Pentagon, in coordination with other agencies, to establish, within 180 days of enactment, a comprehensive cybersecurity and governance policy for all AI and machine learning systems used within the Pentagon. The policy must address risks such as adversarial attacks, data poisoning and unauthorized access while ensuring continuous monitoring and incident reporting. It mandates alignment with existing cybersecurity frameworks and integration into the Pentagon’s operations. Additionally, the Secretary of Defense must report to Congress on any additional authorities or resources needed for full implementation.
- Section 1513 directs the creation of a risk-based framework detailing cybersecurity and physical security standards for AI and machine learning technologies procured by the department. The department must “seek to collaborate with industry and academia” in this process.
- Section 1543 requires the Pentagon to study ways to reduce incentives for adversaries to launch cyberattacks on defense-critical infrastructure. The study must analyze economic, operational and strategic factors that make these systems attractive targets and propose measures such as resilience improvements, redundancy and deterrence strategies. Findings and actionable recommendations, including timelines and resource needs, must be reported to Congress.
- Section 6601 mandates the development of security guidance to defend the nation’s AI against technology theft or sabotage by nation-state adversaries. In developing this guidance, the Director of the National Security Agency is to identify vulnerabilities in the nation's cybersecurity and AI supply chain, as well as strategies for these AI technologies to identify, respond to and recover from cyber threats posed by nation-state adversaries. The Director’s security guidance will be shared with other US departments and agencies, as well as research and private-sector entities, at unclassified and classified levels.
The NDAA seeks to bolster the Pentagon’s management of its cyber capabilities and to limit its ability to reduce certain operational or assessment regimes:
- Section 1501 requires the Pentagon to establish processes for planning, programming and budget coordination specifically for Cyber Mission Force operations. This aims to ensure that cyber capabilities are integrated into broader defense planning and are adequately resourced.
- Section 1503 directs the creation of a framework to incorporate IT technical debt assessments into the annual budget process. Among other things, the Secretary of Defense must “develop a technical debt classification that adequately reflects different types of technical debt” and integrate the framework into department structures “relating to resourcing and programmatic decisions for existing or proposed information technology systems, services, or related programs of record.”
- Section 1504 establishes a Pentagon-wide Data Ontology Governance Working Group to “expand data interoperability, enhance information sharing, and enable more effective decision making throughout the Department.”
- Section 1505 mandates tabletop exercises that develop future force employment concepts and assess different models for command and control of cyber operations, while Section 1507 prohibits the Secretary of Defense from eliminating any “cyber assessment capabilities or red teams” that support operational tests and evaluations for department programs without providing a certification to Congress.
The NDAA also strengthens the State Department’s information security practices. For example, Section 5301 authorizes a POSTDATA Pilot Program to improve secure data handling in embassies, Section 5302 requires the department to issue guidelines tracking the use of commercial cloud enclaves, and Section 5303 requires detailed reports on the department’s technology transformation projects.
Lastly, the NDAA foreshadows new restrictions on commercial spyware, building on steps taken via executive action by the prior administration that remain in effect:
- Section 5304 conveys the sense of Congress that there is a growing national security need for “responsible procurement and application” of commercial spyware capabilities and that the growing market for these capabilities has enhanced “state and non-state actors’” abilities to target journalists, human rights groups and other members of civil society. While it notes that the United States will “oppose the misuse of commercial spyware” to target vulnerable populations, it doesn’t provide much by way of a blueprint of how it intends to do so.
Notable Provisions Not Included
Of course, not everything makes it into the NDAA. And this year, certain closely contested AI and cyber provisions were left on the Capitol’s cutting room floor. For example:
- AI moratorium: Despite renewed efforts by the House to re-include a provision imposing a moratorium on state AI laws, and President Trump’s frequent calls for a federal standard for AI, there was insufficient Republican and Democratic support for the measure in the NDAA talks. While House Majority Leader Steve Scalise (R-La.) has said that Republicans are looking at “other places” for the AI preemption provisions, other leading Republicans, including Senator Josh Hawley, remain staunchly opposed. As described in another WilmerHale client alert last week (here), the President has issued an executive order (E.O.) to sustain momentum in curbing the impact of state-level AI laws, including laying out the Administration’s commitment to work with Congress to promulgate a “national policy framework for AI” that would preempt “discordant” state laws. While the E.O. doesn’t offer much of a blueprint of what that framework will ultimately say, it provides that the Administration will not propose preempting state laws regarding child safety protections, AI compute or data center infrastructure, or state government AI procurement. The E.O. separately establishes an AI litigation task force to challenge burdensome state AI laws and directs the Secretary of Commerce to evaluate existing state AI laws for referral to the task force and to withhold certain federal funding.
- Chip controls: The NDAA also does not restrict export of advanced semiconductor chips to the People’s Republic of China (PRC) and other “countries of concern.” While an earlier version included the Guaranteeing Access and Innovation for National Artificial Intelligence (GAIN AI) Act, which would have required chipmakers to give US companies first priority for chip purchases before permitting exports to China and other countries of concern, it reportedly faced strong opposition within the White House. Subsequently, President Trump announced approval for export of advanced H200 semiconductor chips to the PRC (a significantly more advanced chip than the H20, which was approved for export this summer), though it’s still not clear what conditions may attach. We anticipate that the Senate will continue its separate legislative efforts to restrict chip exports, and we are closely monitoring legislative activity in this space.
- CISA 2015: Lastly, the NDAA does not reauthorize the Cybersecurity Information Sharing Act of 2015 (CISA 2015), which established a voluntary process for sharing cyber threat indicators and defensive measures between private entities and the federal government. CISA 2015 expired on October 1 and was subsequently renewed on a short-term basis via continuing resolution; it is currently set to expire at the end of January.
We anticipate ongoing legislative efforts in the coming months to address some of these outstanding issues, and we will be monitoring those developments closely as we continue to advise a wide range of clients on the full panoply of AI-related matters.
Associates Preston Marquis and Ashley Leon provided invaluable assistance to this alert.