Quick Take: Biden Administration Seeks to Shape Domestic and International Approach to AI Through Executive Order

WilmerHale Privacy and Cybersecurity Law Blog

Today, the Biden Administration released its highly anticipated Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (the “EO”), setting forth a broad vision of the Administration’s legal, regulatory, and policy approach to the development and implementation of artificial intelligence in the United States. Through the EO, the Administration seeks to assert American leadership in an area where there is concern that the federal government’s efforts are not keeping pace with the speed at which the technology is being adopted, or with how it is being regulated in other countries. The EO also makes clear that the Administration will require measures to ensure the safe and responsible development and use of AI, but that it will take a generally cautious approach to regulation in this area so as to preserve continued innovation and avoid unnecessarily stifling the technology’s economic potential.

The EO provides a roadmap of some of the key issues on which the Administration has focused as it seeks to develop guardrails for the technology. Specifically, the EO directs action on new standards for AI safety and security, measures to protect Americans’ privacy, consumer and workforce protections, the promotion of innovation and competition, the advancement of American leadership abroad, and responsible and effective government use of AI.

The EO should be of interest to clients across a wide variety of industries, with heightened immediate importance for government contractors and large technology companies developing advanced AI systems. The EO identifies areas where the Administration is directing the development of standards going forward, including guidance for federal agencies’ use of AI, for red-teaming models, and for detecting AI-generated content and authenticating official content. Meaningful resources will be dedicated across a broad range of government agencies to create and implement these standards, as well as to develop the other policies and reports required by the EO. Companies developing AI products or using AI in any way (or planning to do so), which is certain to be most companies, will need to be cognizant of these standards and recommendations as they are developed. Our experience indicates that companies should not wait for these standards and recommendations to be rolled out; they should instead carefully watch agency developments and anticipate future guidelines based on the Administration’s demonstrated areas of interest.

The EO also highlights key areas where the Administration (and others) have raised concerns about potential discrimination and other consumer protection risks, including in education, health care, housing, and employment. At the same time, the Administration is also focusing on ways to promote innovation and competition. AI is a groundbreaking yet disruptive technology with many areas of potential concern, but it is also one that presents meaningful opportunities for both companies and consumers. The EO attempts to balance these considerations and to ensure orderly technological development that delivers benefits without creating substantial new or expanded risks.

The EO also addresses critical concerns about privacy and cybersecurity. It endorses the need for bipartisan national privacy legislation to protect all Americans, with a particular focus on children, and supports the development of privacy-enhancing technologies as a means of realizing the benefits of AI without creating new privacy risks. The EO also recognizes that AI technology raises meaningful cybersecurity concerns and establishes an advanced cybersecurity program to develop AI tools that find and fix vulnerabilities in critical software.

Companies will want to review the EO to understand whether it imposes any immediate obligations on them. For example, the EO, leveraging the authority of the Defense Production Act, will require companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety to share the results of red-teaming efforts with the federal government. It is important to recognize, however, that most of the EO is focused on future efforts by the federal government intended to shape standards, norms, and legal requirements going forward. Although these provisions do not impose immediate obligations on most companies, it is critical that companies understand the issues the EO addresses and pay close attention to how federal agencies implement its requirements over the coming months, as many of those requirements are intended eventually to flow down to the private sector.

Our Artificial Intelligence Practice is closely reviewing the EO and preparing a more detailed alert to assist our clients in understanding the EO and its implications for their businesses. If you have questions or concerns about the EO and its effect on your business that you would like to discuss with our team, please reach out to us. 
