Managing Legal Risk in the Age of Artificial Intelligence: What Key Stakeholders Need to Know Today

Introduction 

2026 is poised to be a transformative year for artificial intelligence (AI) as businesses move beyond targeted pilot programs to enterprise-wide implementation. While AI promises to unlock new efficiencies and drive innovation, it has also introduced new legal and regulatory risks. This article outlines a practical set of considerations addressing the most current and consequential challenges arising in the AI era for key stakeholders, including Board members, Regulatory Counsel, Privacy Officers, Commercial Attorneys, and IP Counsel.

1. Board Members: Fiduciary Duties to Oversee AI Use 

Over the coming year, AI reliance will continue to expand across core strategic, operational, compliance, and customer‑facing functions in major industries. As this transformation accelerates, Boards could face heightened legal exposure if they fail to exercise adequate oversight of AI‑related risks. Most are not ready. According to the NACD’s 2025 Board Practices and Oversight Survey, only 36% of Boards have implemented a formal AI governance framework, and just 6% have established AI‑related management reporting metrics. 

Under the seminal Caremark doctrine—originating from a landmark Delaware Court of Chancery decision that set the standard for director oversight—Board members may be liable to shareholders if they (1) fail to implement a functioning system for reporting or compliance, or (2) consciously ignore red flags within an existing system. Recent Court of Chancery decisions suggest that courts may be more willing to allow oversight‑related claims to proceed when there is a plausible allegation that a Board’s compliance mechanisms were superficial or ineffective in practice.

Although Delaware courts have not yet confronted Caremark claims in an AI-specific context, existing precedent sets the stage. Where AI is integral to a company’s core products, safety‑critical functions, or heavily regulated operations, it will likely be treated as a “mission‑critical” risk—heightening the Board’s oversight obligations. Conversely, where AI plays a limited role in a company’s operations and offerings, Boards may be wise to avoid adopting unnecessarily elaborate governance structures. The key is proportional, well‑documented oversight that reflects the importance of AI to the enterprise.

Boards should begin by assessing how deeply AI is embedded in the company’s operations and then tailor their oversight accordingly. Understanding where and how AI is being used informs whether existing governance structures are sufficient or whether enhancements are needed. 

While the full Board should stay informed about the company’s AI activities, targeted adjustments can significantly strengthen oversight. If AI use is limited, updating existing reporting channels may suffice; if significant, Boards should consider designating a dedicated committee or responsible executive. In all cases, reporting lines and accountability should be clearly defined in both practice and written materials, ensuring that oversight keeps pace with the company’s AI footprint and that governance structures align with the scale of the technology’s use.

A helpful resource for Boards beginning their AI governance journey or ready for the next stage of that journey is the EqualAI Governance Playbook for Boards, which offers practical guidance for overseeing AI risk. 

2. Regulatory Counsel: Navigating Evolving Compliance Risks 

AI regulation in the United States is accelerating at breakneck speed. More than 1,000 state‑level AI bills were introduced in 2025, and a wave of new laws has already taken effect—targeting everything from deepfakes and intimate‑image abuses to automated decision‑making and data‑privacy requirements across employment, lending, healthcare, education, and other essential services. Yet the legal terrain keeps shifting: many state statutes lean on vague concepts like industry best practices or national and international frameworks, leaving companies to guess what responsible AI development and use really require. Meanwhile, Congress is actively debating whether—and how—to preempt state AI laws, with President Donald Trump issuing an executive order in December signaling a move toward a single “minimally burdensome national standard.”

Against this backdrop, companies are nevertheless racing to adopt and tout AI capabilities, creating a real risk of overhyping what their systems can actually do. Investors, regulators, and consumers are taking notice. In In re GigaCloud Tech. Inc. Securities Litigation, for example, a court found that statements in offering documents describing the company’s AI‑enabled logistics tools were actionable because the company did not, in fact, use AI as advertised—an early sign that “AI-washing” may trigger securities liability. The risk of misrepresentation or deception is not limited to securities‑law exposure; it also falls squarely within the jurisdiction of the Federal Trade Commission and state attorneys general. Thus, even privately held companies need to account for those regulatory risks, not just the risk of securities litigation.

Regulatory counsel should ensure that a reliable system, maintained internally or supported by outside counsel, is in place to track evolving legal requirements and benchmark emerging industry norms. Moreover, regulatory counsel should require coordinated technical and legal review of all public AI‑related statements, from marketing materials to earnings calls, to ensure those statements accurately reflect capabilities and expectations.

3. Privacy Officers: Protecting Organizational Data

As organizations adopt AI tools at speed, employees often lack clarity about what data they can safely input into these systems and how much they can rely on AI‑generated outputs. Without clear guardrails, teams may inadvertently input sensitive information—such as patient identifiers, HR files, financial records, or confidential business materials—into unvetted tools, or rely on outputs that contain inaccuracies, embedded sensitive data, or undisclosed biases. These gaps heighten the risk of data abuses, regulatory noncompliance, and reputational harm, particularly as AI tools become more deeply embedded in everyday workflows.

To address these risks, privacy officers, who oversee data‑protection compliance and manage policies governing personal information, may be best positioned to implement practical policies that govern employee use of AI through internal acceptable‑use rules and governance frameworks. These policies should define what data may be entered into prompts, based on the permissions granted to employees and the applicable legal basis for data processing. They may, for example, exclude inherently sensitive information such as personally identifying information and any data covered by third‑party confidentiality obligations. Employees should be prohibited from pasting internal content into public AI tools, although such use may be permitted where an enterprise agreement is in place. Finally, employees should be required to validate AI outputs for accuracy and to promptly escalate any cybersecurity concerns or instances where outputs contain sensitive data or appear biased.

4. Commercial Attorneys: Contracting With Third-Party Vendors 

As vendors increasingly embed AI into their products, companies and their commercial attorneys must look beyond standard vendor agreements to address AI‑related issues. In particular, the inclusion of AI necessitates a more sophisticated approach to risk allocation, one that accounts for the unique harms AI can trigger, such as hallucination, algorithmic bias, drift, silent adoption of AI features, and unintended large‑scale data scraping.

Traditional commercial agreement templates were drafted for mostly “static” products, whose features and logic do not change unless a developer pushes out a new update. Products that embed AI, however, are “dynamic” and “probabilistic”: their outputs can change based on the data they ingest, and the underlying models are trained on data to make predictions or decisions. This fundamental mismatch makes traditional commercial agreement templates insufficient for AI-specific risk allocation.

Traditional vendor product life cycle management is also inadequate for AI-embedded products. Until now, once a vendor product contract was signed, commercial legal teams rarely participated in the life cycle management of the product, other than for contract amendment purposes. Because AI-related risks can ebb and flow throughout the life cycle of a vendor product, commercial legal teams must now remain actively engaged after the initial deployment of the product to assess and manage product risk allocation. Three risks in particular are worth highlighting:

First, AI models are known to “hallucinate,” i.e., confidently provide false information. AI-embedded products may also produce discriminatory outputs that violate applicable laws (e.g., civil rights laws or the EU AI Act). The parties to a vendor agreement must appropriately allocate the risks and liabilities heightened by the use of AI in vendor products, recognizing that those risks are further elevated for products used in hiring, lending, performance reviews, or healthcare.

Second, AI models “drift”: a model’s performance, accuracy, or behavior can change over time due to shifts in tuning and underlying data. An AI model that passed a security review or audit may drift afterward and become noncompliant.

Third, vendors may add AI capabilities to their products mid‑contract, creating “contractual gaps” where existing terms no longer address key risks. Traditional vendor agreements lack provisions addressing an AI model’s use of a company’s data, breaches of security, ownership of newly generated intellectual property or improvements to existing intellectual property, and a company’s rights to audit the vendor, including visibility into vendor updates to the AI model and data flows. Limited transparency into evolving functionality and risk profiles increases the likelihood of privacy, security, bias, and performance issues, as well as uncertainty around how related liabilities are allocated.

To reduce these risks, companies should adopt an AI‑aware vendor‑management framework. 

Companies should update vendor agreements to address AI‑embedded products and equip commercial teams with a negotiation playbook for AI‑related deals. Templates should require vendors to disclose all AI features at signing and throughout the contract term, notify the company of material model changes, comply with AI‑specific data‑governance and security obligations, and allow data‑protection impact assessments. Organizations must also decide whether vendors may train on company data—and, if permitted, what data types are allowed. Risk‑allocation terms (representations and warranties, liability limits, indemnities) should be tailored to match the company’s risk appetite.

Operationally, companies should create a structured process to identify when vendor tools incorporate AI. Business teams must route AI‑enabled tools through the AI review process, and the company should maintain centralized records of approvals and assessments. These steps ensure third‑party AI remains aligned with organizational risk requirements as vendor technologies evolve.

5. IP and Data Strategy: Ownership and Use of AI‑Generated Outputs and Derivatives 

Neither the United States nor the European Union recognizes AI as the inventor of a patent or the author of a copyrighted work. Agreements, however, must still address ownership of IP generated under them, including IP generated through AI and any derivatives thereof. Absent a contractual arrangement between the parties, default legal ownership rules apply, such that each party owns the inventions or copyrights it generates, which may not align with the company’s strategic goals. Organizations should determine their desired IP and data ownership and use strategies before contracting.

Even though AI is not recognized as an inventor or an author, private parties to a contract can contractually allocate ownership of data and IP generated under such contract. Companies should decide whether they want to own all or portions of these outputs, whether they are willing to license any rights back to the vendor, and how they will approach patent prosecution, enforcement, and defense. Establishing these rules up front ensures the company—not individual users or vendors—retains control over AI‑enabled innovations.

Next Steps 

The accelerating adoption of AI has ushered in a new era of legal complexity and risk. By proactively strengthening oversight, aligning public statements with reality, and embedding responsible‑use frameworks today, organizations can meet this moment with confidence. 

 
