European Parliament Adopts Negotiating Position on the AI Act

WilmerHale Privacy and Cybersecurity Law Blog

On June 14, 2023, the European Parliament adopted its negotiating position on the European Commission’s proposal for a regulation laying down harmonized rules on artificial intelligence. This is the most recent step in the negotiation process toward a final version of the Artificial Intelligence Act (AI Act).

The aim is for the Parliament, the European Commission and the Council of the European Union to agree on a final version of the text by the end of this year, despite some important disagreements, most notably over the use of real-time remote biometric identification for law enforcement purposes.

The Parliament’s adoption of its negotiating position comes at a time when legislators and regulators around the globe are grappling with how to regulate artificial intelligence (AI). If finalized, the AI Act will have an extraterritorial impact given its scoping provisions. At the same time, as was the case with the General Data Protection Regulation (GDPR), other jurisdictions may look to the European Union (EU) as a model as they develop their own rules. Businesses that develop and use AI should therefore follow the AI Act closely, as it may set the standard globally.

The AI Act: An Impact Far Beyond the EU

Like the Commission, the Parliament wants the AI Act to apply to businesses beyond EU borders. Both agree that the AI Act should apply to providers placing AI systems on the market or putting them into service within the EU, irrespective of the provider’s place of establishment. They also generally agree that the AI Act should apply to importers and distributors of AI systems, as well as to users of such systems (which the Parliament prefers to call “deployers”) located in the EU. However, the Parliament has proposed that the AI Act apply to providers and deployers located outside the EU only when the output produced by the AI system is intended to be used in the EU. The Commission’s proposal does not contain this more restrictive intentionality requirement.

Toward Higher Fines?

The Parliament wants noncompliance with the AI Act to be subject to penalties of up to €40 million or 7% of a company’s annual global turnover, whichever is higher. This exceeds the Commission’s proposal of €30 million or 6%, respectively. It remains to be seen how the Council will position itself.

The Parliament’s Focus on AI Autonomy

While the Commission’s proposed definition of AI has been criticized as broad enough to capture virtually all algorithms and computational techniques, the Parliament’s version emphasizes that AI operates with varying levels of autonomy. The Council will likely favor a narrower definition of AI.

A Risk-based Approach Revisited

The AI Act relies on a risk-based approach. There are four levels of risk, each with different requirements; the Parliament’s amendments would affect mainly the riskier categories.

  • Minimal-risk AI, such as AI-enabled video games, would fall outside the scope of the AI Act.
  • Limited-risk AI, such as chatbots, would be subject to transparency requirements (e.g., users should know that they are interacting with a machine).
  • High-risk AI would be subject to a conformity assessment before the product is placed on the market. The assessment should confirm that the AI system builds on an adequate risk assessment, proper mitigation systems and high-quality datasets, and that it provides appropriate compliance documentation, traceability of results, transparency, human oversight, accuracy and security. This category is intended to capture AI systems used in critical infrastructure that could put lives at risk, educational and vocational training that may determine access to education, safety components of products, employment, essential private and public services, law enforcement, border control management, and the administration of justice.

The Parliament is seeking to expand the classification of high-risk areas to include AI systems that may cause harm to people’s health, safety and fundamental rights, as well as the environment; AI systems used to influence voters in political campaigns; and AI systems used by very large social media companies to determine what content to promote or demote among users. It remains to be seen how the Council will react to the Parliament’s suggested amendments.

  • Unacceptable-risk AI would be banned. Examples of prohibited AI systems in the Commission’s proposal include toys that use voice assistance to encourage dangerous behavior in children.

The Parliament is seeking to ban real-time remote biometric identification systems in publicly accessible spaces; post-remote biometric identification systems (except for law enforcement purposes under certain conditions); biometric categorization systems using sensitive characteristics, such as gender or ethnicity; predictive policing systems; emotion recognition systems in law enforcement, border management, the workplace and educational institutions; and indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.

Unlike the Parliament, the Commission and the Council want to allow law enforcement to use real-time remote biometric identification systems. This is seen as a key stumbling block in the forthcoming negotiations between the Parliament and the Council.

Foundation Models and Generative AI Systems

  • Foundation models. The Parliament is seeking to add obligations for providers of foundation models, i.e., AI systems trained on broad data at scale and designed for generality of output that can be adapted to a wide range of distinctive tasks. Providers of such models would have to assess and mitigate possible risks and register their models in an EU database before the models are released on the market.
  • Generative AI systems. Generative AI systems based on foundation models, like ChatGPT, would have to comply with additional transparency requirements, such as disclosing that the content was generated by AI, designing the model to prevent the generation of illegal content, and publishing summaries of copyrighted data used for training. 

The Council is likely to agree that foundation models need to be regulated, but whether it will agree with the Parliament’s proposed language is unclear.

Next Steps

Despite the difficult political discussions ahead, there is a good chance that the AI Act will be passed before the end of the year. Even so, the AI Act is unlikely to take effect before mid-2025. Given the act’s complex and far-reaching requirements, however, companies will want to begin building their compliance programs well before the AI Act takes effect.
