On February 19, 2020, the European Commission (“EC”) published a series of documents outlining its vision for Europe’s digital future.1 Amongst these, the EC’s White Paper on Artificial Intelligence: A European approach to excellence and trust provides a roadmap of the rules that businesses may have to comply with in the near future. The EC’s plan is twofold.
- First, the EC wants to develop a regulatory framework for AI that would follow a risk-based approach.
- Second, the EC wants to ensure that existing EU legislation is adapted and prepared for new AI challenges.
The EC invites all businesses to share their views on the proposals set out in the White Paper through a public consultation, which is open until May 19, 2020.
These proposals are relevant not only for technology companies but for any company offering or using AI-enabled products or services, e.g., in sectors such as healthcare, transport, financial services and energy. They will have a major impact on the way companies design and operate their AI systems and the way they place them on the EU market.
Summary of the EC’s AI Proposals
The EC recognizes that AI is expected to bring massive benefits to society, but also discusses the risks that AI could raise. The White Paper takes the view that the main risks related to the use of AI concern EU fundamental rights, as well as safety and liability-related issues.
Risks for EU fundamental rights might result from a lack of human oversight, the use of training data that embed bias without correction, or the way AI is used to perform content moderation. AI could also affect the protection of individuals’ personal data.
Risks for safety and liability issues could arise, for example, if an autonomous car wrongly identifies an object and causes an accident. Who is liable in such a case?
Dimensions of a Specific AI Regulatory Framework in the EU
- Which Sectors Would Be Regulated? The EC considers that a specific AI regulatory framework should apply only to sectors where AI could raise high risks, such as healthcare, transport and energy, and only where AI is used in a manner likely to give rise to significant risks. So, even though healthcare is a relevant sector, a flaw in a hospital’s appointment-scheduling system would not trigger the application of AI-specific rules.
- Who is Concerned? The AI regulatory framework would apply to the actors who are best placed to address any potential risks, so it would not be limited to developers but could apply to any company using AI.
- What Would Be the Geographical Scope? The AI rules would apply to all economic operators providing AI-enabled products or services in the EU, regardless of whether they are established in the EU.
- What Would Be the Rules? The AI regulatory framework would be based mainly on six requirements.
- Training Data. Because the functioning of an AI system depends on the data set on which it was trained, AI systems should be trained on data sets that are sufficiently broad and representative, covering all relevant scenarios, so as to avoid discrimination and dangerous situations.
- Keeping of Records and Data. There may also be requirements to keep records of the data sets (and in some cases the data sets themselves), the documentation on the programming, and the methodologies used to train and test the AI systems. This is meant to facilitate authorities’ supervision and enforcement.
- Information Provision. There may be a requirement to provide information as to the AI systems’ capabilities and limitations, and to clearly indicate when people are interacting with an AI system.
- Robustness and Accuracy. The EC is considering requirements designed to ensure that AI systems are robust and accurate, and that outcomes are reproducible.
- Human Oversight. The EC considers it highly important to guarantee human oversight of AI systems’ decisions, but concedes that the appropriate type and degree of oversight may vary from case to case. The EC has identified four types and degrees of human oversight; the issue will be determining which option should apply to a specific application. The four options are:
- The output of the AI system does not become effective unless it has been previously reviewed and validated by a human being (e.g., only a human being may reject an application for social security benefits);
- The AI system’s decision is immediately effective, but human intervention is ensured afterwards (e.g., an AI system rejects an application for a credit card, but human review remains possible afterwards);
- A human being monitors the AI system and can intervene in real time (e.g., an emergency button in a driverless car);
- The AI system is designed to ask a human to take over in certain conditions (e.g., an autonomous car stops operating when sensor reliability is affected).
- Remote Biometric Identification Requirements. The collection and use of biometric data for remote identification purposes is already subject to significant restrictions under the European General Data Protection Regulation. The EC is now planning to launch a debate on the circumstances, if any, which might justify the use of AI for such purposes. An earlier version of the EC White Paper, which had leaked to the press, called for a three- to five-year ban on the use of AI for remote biometric identification purposes.
- How Would Compliance Be Controlled? The EC considers that high-risk AI applications should be subject to a prior conformity assessment. This assessment would consist of testing, inspection, certification, and checks of the algorithm and of the data sets used in the development phase to ensure compliance with the above-mentioned requirements.
Changing Current EU Laws to Address New AI Challenges
The EC may change its rules to ensure that they can address the risks arising from AI. In particular, the EC is considering the following changes.
- Effective Application and Enforcement of Existing Laws. In the EC’s view, the opacity of AI may make it difficult for authorities to identify and prove possible breaches of the law. Legal amendments, including transparency requirements, might be needed to ensure that existing laws can be effectively applied and enforced against AI systems.
- AI Products and Services. The EC wants to ensure that EU safety legislation applies to both AI-enabled products and services. Currently, EU safety laws only apply to products.
- Changing Functionality of AI Systems. The EC also takes the view that EU legislation should address safety risks raised by AI products and services during their whole lifecycle. Currently, EU laws predominantly focus on safety risks that are present at the time of placing on the market.
- Responsibilities and Concept of Safety. The EC would like to clarify liability rules for all players in the AI supply chain and for all AI-specific types of risk, such as cyber threats or risks that result from the loss of connectivity.
By June 2020, the EC plans to publish an assessment list to help companies verify the application of the key AI requirements identified above.
It is unclear when the EC’s proposals will be turned into formal legislative texts, or when they would become law in Europe. The proposals are likely to be much debated on proportionality grounds, especially regarding the freedom to conduct business and the protection of business secrets, two fundamental values in Europe.
However, the process to regulate AI in Europe has clearly begun. Now is a key time for companies to start thinking about how to respond to the EC’s plans and, if likely to be affected, to submit comments to the EC’s public consultation.