As AI systems became more deeply embedded in consumer‑facing products and services throughout 2025, regulators and private plaintiffs continued to test how existing privacy and consumer‑protection laws apply to the collection, use, and commercialization of personal data in AI development and deployment.
Over the past year, federal and state enforcement agencies intensified their scrutiny of AI‑related practices, focusing in particular on unsubstantiated marketing claims, opaque or inaccurate data use disclosures, and risks to children. At the same time, private litigants advanced a wide range of novel theories—challenging everything from AI training practices to the use of automated chatbots under long‑standing electronic‑communications statutes. Courts responded with mixed results, offering early but important signals about disclosure obligations, consent, and the limits of applying legacy privacy laws to emerging technologies.
This article surveys key privacy‑related litigation trends involving AI in 2025, including enforcement actions by regulators, private lawsuits, and related legislative and judicial developments shaping the future of AI governance. To stay up to date on these developments, subscribe to the WilmerHale Privacy and Cybersecurity Law Blog.
I. Consumer-Protection Actions
In 2025, government entities continued to scrutinize data-related practices of AI models and AI-enabled products and services. While many enforcement actions did not turn exclusively on privacy-related theories of liability, they nevertheless reflect growing interest in how companies collect, use, and describe data in connection with AI.
State Actions
State attorneys general (AGs) increased their focus on AI‑related consumer‑protection and privacy risks throughout 2025. As we discussed in a recent blog post, state AGs across the political spectrum are particularly focused on concerns related to AI chatbots. In August 2025, for example, a bipartisan coalition of state AGs issued a joint warning to leading AI developers, emphasizing that companies would be held accountable for harms stemming from AI systems’ access to and use of consumer data—especially where those systems may affect children.1 The Texas AG likewise announced an investigation into alleged representations that chatbots can serve therapeutic purposes.2 These ongoing efforts suggest that, even absent comprehensive federal AI legislation, state regulators are prepared to use existing consumer‑protection tools to influence AI product design and data‑governance practices.
Federal Actions
At the federal level, the Federal Trade Commission (FTC) continued to leverage its authority over consumer-protection matters to scrutinize companies developing or deploying AI tools, with a focus on allegedly deceptive or unsubstantiated marketing claims. Late in the Biden administration, the FTC launched “Operation AI Comply,” an enforcement initiative aimed at curbing false or misleading representations about AI capabilities and outcomes. Although that effort largely carried forward into the second Trump administration, in 2025 the agency signaled some willingness to revisit certain prior decisions in light of evolving executive‑branch priorities around minimizing regulation in the AI sector, finding in at least one case that a prior consent order “unduly burden[ed] innovation in the nascent AI industry.”3
The FTC brought several Section 5 unfair or deceptive conduct actions against companies accused of overstating the capabilities or benefits of their AI products, seeking injunctions, monetary relief, and—in at least one case—a permanent ban on offering AI‑related services.4 In parallel, the agency distributed more than $15 million in connection with allegations that a developer using AI tools stored, used, and sold consumer information without consumers’ knowledge.5 This action underscored the connection between traditional privacy theories and consumer‑protection enforcement against developers harnessing AI.
Beyond enforcement, the FTC also relied on its Section 6 investigatory authority to examine the practices of technology companies offering AI‑powered chatbots and companion tools.6 These inquiries sought detailed information about data‑collection practices, model training, retention policies, and safeguards designed to protect minors, with particular attention to compliance with the Children’s Online Privacy Protection Act.7 Although these investigations have yet to yield sweeping public enforcement outcomes, they reflect the agency’s sensitivity to the privacy implications of AI chatbots and signal likely areas of future scrutiny.
Private Actions
Private plaintiffs, for their part, tested increasingly novel consumer‑protection theories in cases challenging AI development and deployment. In one lawsuit, for example, a plaintiff alleged that a company had unlawfully exploited the “cognitive labor” generated through user interactions with its AI system by capturing and using that data without compensation.8 Although the court ultimately dismissed the claims for failure to state a cognizable legal theory, the case illustrates the creative—and occasionally expansive—approaches plaintiffs have pursued in attempting to characterize AI data practices as unfair or deceptive.
II. Privacy Laws
A second—and increasingly consequential—strand of AI‑privacy litigation in 2025 involved efforts to extend existing electronic‑communications and privacy statutes to AI‑enabled tools and data‑collection practices. Courts were asked to determine whether long‑standing prohibitions on unauthorized interception, disclosure, or misuse of personal information can accommodate technologies that replace or augment human interaction, collect data at scale, and repurpose that data for model development or improvement.
AI Chatbots and Electronic-Communications Statutes
Several cases tested whether AI chatbots deployed in customer‑service or consumer‑interaction settings constitute unlawful interception under state and federal electronic‑communications laws. In Taylor v. ConverseNow Technologies, for example, a federal court allowed a putative class claim under the California Invasion of Privacy Act (CIPA) to proceed past the motion‑to‑dismiss stage against a SaaS company that enables restaurants to process customer phone calls using an AI assistant.9 The court focused on whether the chatbot provider could be treated as a “third party” interceptor, distinguishing between data used exclusively to benefit the consumer and data leveraged for the provider’s own commercial purposes, including system improvement. Where consumer data allegedly served both roles, the court found plausible grounds for liability under CIPA’s wiretapping provisions.10
By contrast, other courts have been more skeptical of attempts to apply electronic‑communications statutes to AI training practices. In Rodriguez v. ByteDance, for example, the court dismissed claims brought under CIPA and the federal Electronic Communications Privacy Act, concluding that allegations that the technology company used personal data to train AI systems were overly speculative absent more concrete facts about interception or disclosure.11
AI Training Data and Invasion-of-Privacy Claims
Some lawsuits also involved allegations that companies collected or repurposed consumer data without adequate disclosure or consent. In Riganian v. LiveRamp, for instance, a putative class of consumers survived an early motion to dismiss after alleging that a data broker used AI tools to collect, combine, and sell personal information drawn from both online and offline sources.12 The court concluded that plaintiffs had plausibly alleged invasive and nonconsensual data practices sufficient to support common‑law privacy claims under California law, as well as claims under CIPA and the federal Wiretap Act.
III. Related Developments—State Legislative Action and the Courts
While privacy‑related AI litigation continued to develop in the courts in 2025, state legislatures and court systems also took steps that may shape how such litigation evolves.
As our team at WilmerHale has explained, in 2025 state legislatures across the country focused on AI regulation, with California, Colorado, and Texas working to implement new laws expressly addressing AI systems. In addition, in 2025 more than half of the states enacted laws aimed at privacy harms stemming from the creation and spread of “deepfakes”—that is, maliciously altered digital depictions of a person’s likeness or voice.13 Lawmakers also targeted AI-related privacy and data-transparency concerns more broadly, including with respect to customer-service bots and potentially discriminatory AI model outputs.14 State legislators and AGs, meanwhile, continue to broadly oppose federal preemption of state AI laws, seeking to preserve states’ role in AI governance.15
Courts themselves also emerged as important institutional actors in AI governance. For example, the Arkansas Supreme Court adopted a rule requiring legal professionals to verify that AI tools used in connection with court work do not retain or reuse confidential data, warning that failure to do so could constitute professional misconduct. Other jurisdictions, including New York and Pennsylvania, issued similar guidance restricting the use of generative AI in ways that could compromise client confidentiality or judicial integrity.16
* * *
Companies developing or deploying AI technologies should continue to monitor this rapidly evolving landscape as courts, regulators, and legislatures refine the contours of permissible data use. WilmerHale has a deep bench of attorneys with experience advising on AI‑related legal issues across the litigation, regulatory, transactional, and intellectual‑property contexts. The firm regularly works with AI model developers, testers, and deployers, counseling clients on evolving federal, state, and international AI legislative and regulatory frameworks. WilmerHale will continue to track these developments and assist clients in navigating the complex legal challenges posed by AI.
Please join us at one of our East Coast offices—DC, NYC, or Boston—for a practical update on what’s ahead in 2026, including new state privacy and AI laws, enforcement and litigation trends, breach risks, and actionable compliance strategies. After the briefing, there will be a networking reception. CLE credit is available.
Full details and an RSVP link can be found here.
Footnotes:
1. Joint Letter to AI Industry Leaders on Child Safety, Nat’l Assoc. Attys. Gen., https://www.naag.org/policy-letter/joint-letter-to-ai-industry-leaders-on-child-safety (Aug. 25, 2025).
2. See Attorney General Ken Paxton Investigates Meta and Character.AI for Misleading Children with Deceptive AI-Generated Mental Health Services, https://www.texasattorneygeneral.gov/news/releases/attorney-general-ken-paxton-investigates-meta-and-characterai-misleading-children-deceptive-ai (Aug. 18, 2025).
3. FTC Reopens and Sets Aside Rytr Final Order in Response to the Trump Administration’s AI Action Plan, Fed. Trade Comm’n, https://www.ftc.gov/news-events/news/press-releases/2025/12/ftc-reopens-sets-aside-rytr-final-order-response-trump-administrations-ai-action-plan (Dec. 22, 2025).
4. See, e.g., FTC Sues to Stop Air AI from Using Deceptive Claims About Business Growth, Earnings Potential, and Refund Guarantees to Milk Millions from Small Businesses, Fed. Trade Comm’n, https://www.ftc.gov/news-events/news/press-releases/2025/08/ftc-sues-stop-air-ai-using-deceptive-claims-about-business-growth-earnings-potential-refund (Aug. 25, 2025); FTC Case Against E-Commerce Business Opportunity Scheme and Its Operators Results in Permanent Ban from Industry, Fed. Trade Comm’n, https://www.ftc.gov/news-events/news/press-releases/2025/08/ftc-case-against-e-commerce-business-opportunity-scheme-its-operators-results-permanent-ban-industry (Aug. 25, 2025); FTC Approves Final Order Against Workado, LLC, Which Misrepresented the Accuracy of Its Artificial Intelligence Content Detection Product, Fed. Trade Comm’n, https://www.ftc.gov/news-events/news/press-releases/2025/08/ftc-approves-final-order-against-workado-llc-which-misrepresented-accuracy-its-artificial (Aug. 28, 2025).
5. FTC Sends Payments to Consumers Impacted by Avast’s Deceptive Privacy Claims, Fed. Trade Comm’n, https://www.ftc.gov/news-events/news/press-releases/2025/12/ftc-sends-payments-consumers-impacted-avasts-deceptive-privacy-claims (Dec. 2, 2025).
6. FTC Launches Inquiry into AI Chatbots Acting as Companions, Fed. Trade Comm’n, https://www.ftc.gov/news-events/news/press-releases/2025/09/ftc-launches-inquiry-ai-chatbots-acting-companions (Sept. 11, 2025).
7. 15 U.S.C. §§ 6501-6506 (2018).
8. Small v. OpenAI, 2025 U.S. Dist. LEXIS 201648 (E.D.N.C. Oct. 10, 2025).
9. 2025 WL 2308483 (N.D. Cal. Aug. 11, 2025).
10. The court followed reasoning shaped by Yockey v. Salesforce, Inc., 745 F. Supp. 3d 945 (N.D. Cal. 2024).
11. 2025 WL 2495865 (N.D. Ill. Aug. 25, 2025). For another example of a case dealing with party status under electronic-communications statutes, see Q.J. v. PowerSchool Holdings, No. 1:23-cv-05689 (N.D. Ill. 2025).
12. 791 F. Supp. 3d 1075 (N.D. Cal. 2025).
13. As AI Tools Become Commonplace, So Do Concerns, Nat’l Conf. State Leg., https://www.ncsl.org/state-legislatures-news/details/as-ai-tools-become-commonplace-so-do-concerns (Nov. 11, 2025).
14. Id.
15. State Attorneys General Urge Congress to Preserve Local Authority on AI Regulation, Nat’l Assoc. Attys. Gen., https://www.naag.org/policy-letter/state-attorneys-general-urge-congress-to-preserve-local-authority-on-ai-regulation (Nov. 25, 2025).
16. Interim Policy on the Use of Artificial Intelligence, N.Y. Unif. Court Sys., https://www.nycourts.gov/LegacyPDFS/a.i.-policy.pdf (Oct. 2025); Interim Policy on the Use of Generative Artificial Intelligence by Judicial Officers and Court Personnel, Pa. Unif. Court Sys., https://www.pacourts.us/assets/opinions/Supreme/out/Attachment%20-%20106502825326188944.pdf?cb=1.