FTC Warns Companies of the Potentially Deceptive Uses of AI Tools

WilmerHale Privacy and Cybersecurity Law Blog

On March 20, 2023, the Federal Trade Commission (FTC) released a blog post advising companies to consider the potentially deceptive or unfair use of artificial intelligence (AI) tools to generate synthetic media. The FTC calls the use of AI to create or spread deception "the AI fake problem," and it describes the issue as a growing one. According to the FTC, fraudsters have used generative AI and synthetic media to create and spread false narratives at scale and at low cost. The agency cautions that AI chatbots can be used to create phishing emails, fake websites and fake profiles, and to help generate malware, ransomware and prompt injection attacks.

Over the past few months, the FTC has kept a close watch on the development of AI technology, issuing several pieces of AI-related guidance. For example, in February 2023, the agency published a blog post warning companies that rely (or purportedly rely) on AI not to exaggerate claims about their products or to fail to account for reasonably foreseeable risks to consumers. The FTC's recent blog post signals its continued focus on this topic. Companies should recognize that the FTC can initiate enforcement actions to penalize conduct it views as unfair or deceptive.

Blog Post Summary

The FTC directs companies to consider four questions before making, selling or using AI:

1. Should you even be making or selling it? The FTC directs companies that develop or offer synthetic media or generative AI products to consider, at the design stage and thereafter, the reasonably foreseeable ways their products may be misused for fraud or to cause harm.

2. Are you effectively mitigating the risks? The FTC directs companies that develop or offer AI products to take "reasonable precautions" before entering the market. The FTC considers it insufficient merely to warn consumers about misuse or to require users to make disclosures. Instead, the agency advises companies to build deterrence measures that are "durable, built-in features" and "not bug corrections or optional features" that bad actors can modify or remove. The FTC adds that companies should think twice about whether a product really needs to be anthropomorphized or to emulate humans, or whether it can be just as effective acting like a bot.

3. Are you over-relying on post-release detection? Although the FTC acknowledges that researchers are improving the ability to detect AI-generated content, it cautions that those researchers are "in an arms race with companies developing the generative AI tools, and the fraudsters using these tools will often have moved on by the time someone detects their fake content." The FTC suggests that the burden should be on companies, not consumers, to determine whether a generative AI tool is being used to deceive.

4. Are you misleading people about what they're seeing, hearing or reading? The FTC directs advertisers to think twice before using AI-generated content. The agency notes that it has warned companies about misleading consumers via fake dating profiles, phony followers, deepfakes, chatbots and the like, and that it has brought enforcement actions against companies engaging in such conduct.

Chatbots and generative AI pose a host of legal and business risks, which we detailed in a recent article. As the FTC's recent guidance demonstrates, one such risk comes from consumer-protection agencies, which may use laws designed to prevent deceptive trade practices to regulate AI tools. As the FTC notes in its blog post, it has brought enforcement actions against companies for using fake online content and has required companies to destroy the underlying algorithms powering their systems. Companies that use this technology should proceed cautiously and carefully review the FTC's guidance and best practices for developing and using AI.

Other regulators are also scrutinizing AI tools. Italy's privacy regulator, for example, recently imposed a temporary limitation on OpenAI's processing, via ChatGPT, of data belonging to individuals residing in Italy. We will continue to provide updates on major developments in the legal and business risks of generative AI and more. You can also stay on top of all of our updates by subscribing to the WilmerHale Privacy and Cybersecurity Blog.
