Four Federal Agencies Reaffirm Authority to Monitor Automated Systems for Unlawful Discrimination and Other Federal Law Violations

On April 25, 2023, four federal agencies—the Consumer Financial Protection Bureau (CFPB), the Federal Trade Commission (FTC), the United States Department of Justice (DOJ), and the U.S. Equal Employment Opportunity Commission (EEOC)—released a joint statement pledging vigorous use of their respective authorities to protect against discrimination and bias in automated systems.

Although the statement does not break new ground, it illustrates federal agencies’ concern about how quickly the technology around AI and other automated systems is advancing and, we think, amounts to a tacit acknowledgement that comprehensive AI legislation or regulation is unlikely in the near term. The statement asserts that the agencies’ enforcement authorities apply to automated systems and that those systems may contribute to unlawful discrimination or otherwise violate federal law. Each of these agencies has already issued guidance or taken action relating to automated systems, stressing that their existing legal authorities reach innovative technologies even when it is not immediately apparent how those authorities apply. The joint statement is a reminder that entities deploying automated systems to make important decisions about individuals must thoughtfully assess whether those decisions comply with the law.

Broad Definition of Automated Systems

The joint statement defines “automated systems” broadly; it covers not just AI, but any software and algorithmic processes “that are used to automate workflows and help people complete tasks or make decisions.” This is an expansive definition that encompasses many algorithms used by businesses and other applications that leverage consumer data. 

The statement focuses on three sources of potential discrimination:

Data and Datasets - Automated systems need large amounts of data to find patterns or correlations and then apply those patterns to new data. Issues with the underlying data affect how the system makes decisions: outcomes can be skewed by unrepresentative datasets, and datasets with baked-in biases can produce discriminatory results when the system is applied to new data (see the sketch after this list).

Model Opacity and Access - Automated systems are complex, and most people, sometimes even those who develop the tools, do not know exactly how these systems work; this lack of transparency makes it difficult for entities to assess whether their automated systems are fair.

Design and Use - Developers might design an automated system based on flawed assumptions about its users, relevant context, or the underlying traditional practices that the system is replacing.
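To make the dataset point concrete, consider the following minimal Python sketch. The scores, group sizes, and approval rule are entirely hypothetical, invented here for illustration; neither the joint statement nor the agencies prescribe any particular test:

```python
# Hypothetical illustration of how an unrepresentative dataset can skew
# automated decisions. All scores, groups, and the naive approval rule
# below are invented for this example.

# Historical records used to fit a simple score threshold; group B is
# badly underrepresented (2 records vs. 20 for group A).
historical = [("A", 700)] * 20 + [("B", 640)] * 2

# Naive rule: approve anyone at or above the historical mean score.
threshold = sum(score for _, score in historical) / len(historical)  # ~694.5

applicants = [("A", 695), ("B", 660), ("B", 650)]
for group, score in applicants:
    decision = "approve" if score >= threshold else "deny"
    print(f"{group} {score}: {decision}")

# The threshold reflects group A's distribution, so group B applicants
# scoring well above their own group's historical mean (640) are denied.
```

In a real system the skew is rarely this visible, which is how the second source of discrimination, model opacity, compounds the first.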

Existing Agency Guidance

The four agencies that issued the joint statement are among the federal agencies responsible for enforcing civil rights, non-discrimination, fair competition, and consumer protection laws. All four have previously expressed concern about the potential harms of AI systems through statements, guidance, or enforcement actions. For example, in a 2022 circular, the CFPB confirmed that federal consumer protection laws apply regardless of the technology used, and that the complexity of the technology behind a credit decision is not a defense to violating those laws. The FTC has issued a report evaluating the use and impact of AI in combating online harms, highlighting that AI tools can be discriminatory and can incentivize reliance on invasive forms of commercial surveillance. The FTC has also warned market participants that using automated tools with discriminatory impacts may violate the FTC Act, as may making unsubstantiated claims about AI or deploying AI before taking steps to evaluate and minimize risk.

Takeaways and Conclusion

The joint statement and recent agency guidance make clear that the CFPB, FTC, DOJ, and EEOC will monitor the development and use of automated systems to protect consumers, promote fair competition, and prevent discrimination. Companies using automated systems should keep this guidance in mind and assess the risks and potential harmful impacts of those systems. In particular:

  1. Companies using automated systems should establish sound governance processes that include (a) inventorying automated systems; (b) assigning risk levels to those systems based on factors such as their potential impact on consumers and on current and prospective employees; (c) documenting system design and testing; and (d) implementing a robust change-management process.
  2. Companies should understand what biases might arise from skewed datasets. For instance, datasets containing disproportionately few data points about certain demographic groups could lead automated systems to perpetuate discrimination (one simple outcome check appears in the sketch after this list).
  3. Entities should understand how their automated systems work and make decisions, so they can evaluate and address any potential biases in the design of the system that could lead to discriminatory outcomes.
  4. Businesses should understand who will use their AI systems and in what contexts, so they can mitigate unintended discriminatory outcomes.
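As a simplified illustration of the kind of outcome monitoring suggested by takeaways 2 and 3, the sketch below compares selection rates across demographic groups against the EEOC’s long-standing “four-fifths” guideline, under which a group’s selection rate below 80% of the highest group’s rate is commonly treated as a signal of possible adverse impact. The outcome data and group labels are hypothetical, and a check like this is a screening heuristic, not a substitute for legal or statistical analysis:

```python
# Hypothetical outcome audit: compare selection rates across groups and
# flag any group whose rate falls below four-fifths (80%) of the highest
# group's rate. Data and group labels are invented for illustration.
from collections import defaultdict

# (group, selected) pairs produced by a hypothetical automated screen.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
for group, selected in outcomes:
    counts[group][0] += int(selected)
    counts[group][1] += 1

rates = {group: sel / total for group, (sel, total) in counts.items()}
highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

Running this on the toy data flags group_b (selection rate 0.25 against group_a’s 0.75, an impact ratio of 0.33) for further review; a passing result on such a check, of course, does not establish that a system is lawful.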

We are happy to answer any questions you may have. You can also stay on top of all of our updates by subscribing to the WilmerHale Privacy and Cybersecurity Blog.
