In light of the rapid emergence and adoption of artificial intelligence (AI) tools and systems, the Biden Administration convened the CEOs of leading AI companies at the White House on May 4, 2023, and announced several projects to promote what it terms “responsible” AI innovation.
The White House also signaled an openness to further regulation of these technologies. “[T]he private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products,” Vice President Harris said after the meeting, adding that “every company must comply with existing laws to protect the American people.” President Biden stopped by the meeting, which included Sundar Pichai of Alphabet, Dario Amodei of Anthropic, Satya Nadella of Microsoft, and Sam Altman of OpenAI, and told the executives that what they are doing “has enormous potential—and enormous danger.”
To coincide with the CEOs’ meeting, the White House announced three AI initiatives that fund responsible AI research, provide for independent community evaluation of AI systems, and begin the process of establishing US Government-wide AI policy:
Investment in AI Research and Development. The National Science Foundation is investing $140 million to launch seven new National AI Research Institutes (the “Institutes”), bringing the total number of Institutes to 25. The Institutes are charged with catalyzing collaborative efforts across institutions of higher education, federal agencies, industry, and others. They are to pursue AI innovation in a way that is ethical, trustworthy, responsible, and serves the public good, driving breakthroughs in areas such as climate, agriculture, energy, public health, education, and cybersecurity. This investment adds to the billions that private sector companies are pouring into advancing the technology.
Community Testing of Existing Generative AI Systems. Leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, agreed to participate in a public evaluation of existing AI systems, consistent with responsible disclosure principles, in August 2023 at DEF CON 31, a hacker convention. Notably, the AI models will be evaluated and tested by the community, independent of the government and of the companies that developed them. The assessments will be measured against the principles outlined in two policy documents released by the Biden Administration in 2022 and early 2023: the Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology (NIST) AI Risk Management Framework. The lessons learned from these assessments will inform improvements to AI systems, where necessary.
Draft Policy Guidance. In the summer of 2023, the U.S. Office of Management and Budget (OMB) will release for public comment draft policy guidance on the use of AI systems by the US government. This guidance will establish specific policies for federal departments and agencies to follow to ensure that their development, procurement, and use of AI systems centers on safeguarding individual rights and safety.
Existing Agency Guidance and Administration Policy
The White House framed these initiatives as “build[ing] on the considerable steps” the Administration has already taken to promote the responsible development of AI. Those earlier efforts include the aforementioned Blueprint for an AI Bill of Rights, which identifies principles that should guide the design, use, and deployment of automated systems to protect the American public; the AI Risk Management Framework, which sets forth principles for managing risks related to validity and reliability, safety, security and resiliency, explainability and interpretability, privacy, and fairness and bias; and a roadmap for standing up a National AI Research Resource (NAIRR), released earlier this year by the NAIRR Task Force.
Other agencies are releasing AI guidance at a swift clip. For example, in April 2023, four federal agencies—the Consumer Financial Protection Bureau (CFPB), the Federal Trade Commission (FTC), the United States Department of Justice (DOJ), and the U.S. Equal Employment Opportunity Commission (EEOC)—released a joint statement pledging vigorous use of their respective authorities to protect against discrimination and bias in automated systems. Also in April, the Department of Homeland Security announced the launch of a task force dedicated to using AI to advance critical homeland security missions. And in May, the FTC released a blog post cautioning companies about the use of generative AI tools to change consumer behavior. The post explained that the manipulative use of generative AI can be illegal, even if not all customers are harmed and even if those harmed do not comprise a class of people protected by anti-discrimination laws.
The announced initiatives and White House statements signal what companies should do to develop and deploy AI tools in a fashion that minimizes regulatory risk.
First, the President “underscore[d] that companies have a fundamental responsibility to make sure their products are safe and secure before they are deployed or made public,” according to a White House readout of the meeting. Accordingly, entities should validate the safety and security of an AI system before deploying it for broad use.
Second, the White House emphasized the “imperative to mitigate both the current and potential risks AI poses to individuals, society, and national security,” including “risks to safety, security, human and civil rights, privacy, jobs, and democratic values.” The Administration underscored the importance of the CEOs’ personal leadership and called on them to “model responsible behavior” and “take action to ensure responsible innovation and appropriate safeguards.” Thus, AI companies should design their systems with trust, safety, and risk mitigations in mind. (We discussed the top ten business and legal risks of generative AI in an earlier client alert.)
Third, of the announced initiatives, the OMB guidance may well prove the most significant: the US government is the largest purchaser in the world, so which AI systems it procures, and how it weighs individual rights and safety in those decisions, may have significant impacts on the commercial market.
This is a quickly developing area, and we are happy to answer any questions you may have. You can also stay on top of all of our updates by subscribing to the WilmerHale Privacy and Cybersecurity Blog.