This article was first published by Lawfare on July 17, 2023.
On May 30, approximately 350 artificial intelligence (AI) experts signed a letter expressing significant concerns about risks associated with AI. The letter stated that “[m]itigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The group’s letter is the latest in a string of warnings about the potential risks, both small and existential, that may result from the development and deployment of AI. Whether or not these concerns are ultimately realized, there is consensus among key players in both the private and public sectors about the need for AI regulation now. But conceptions of responsible AI risk management and appropriate regulation are already diverging across jurisdictions. Below is a point-in-time effort to capture the differences between jurisdictions, focusing on developments in the United States, the European Union, and the United Kingdom, to make sense of the rapid development of AI regulation across the globe.
The question isn’t whether AI will be regulated, but how. Both the European Union and the United Kingdom have stepped up to the AI regulation plate with enthusiasm but have taken different approaches: The EU has put forth a broad and prescriptive proposal in the AI Act, which adopts a risk-based approach that scales compliance obligations to the specific use case. The U.K., in turn, has committed to abstaining from new legislation for the time being, relying instead on existing regulations and regulators with an AI-specific overlay. The United States, meanwhile, has pushed for national AI standards through the executive branch but has also adopted some AI-specific rules at the state level (both through comprehensive privacy legislation and for specific AI-related use cases). Between these three jurisdictions, there are multiple approaches to AI regulation that can help strike the balance between developing AI technology and ensuring that a framework is in place to account for potential harms to consumers and others. Given the explosive popularity and development of AI in recent months, companies, entrepreneurs, and tech leaders are likely to push hard in the near future for additional regulatory clarity on AI. Regulators will have to answer these calls. Despite not knowing what AI regulation in the United States will look like in one year (let alone five), savvy AI users and developers should examine these early regulatory approaches to try to chart a thoughtful approach to AI...
Read the remainder of the article on Lawfare.