Europe: Official announcement of the draft regulation on artificial intelligence

On April 21, 2021, the European Commission officially announced its plans for regulations and actions “to promote excellence and trust in artificial intelligence.” The announcement follows the publication of the AI white paper in February 2020 and a broad public consultation. It is the first time in the history of the European Union that a legal framework on AI has been proposed.

Initial background

Last week, several media outlets unveiled a first sketch of what the draft EU regulation on AI might look like. Let’s quickly review that information:

According to several sources, the draft regulation aimed to distinguish “beneficial” AI systems from those that could cause problems, which would be classified as “high risk” under the legislation. This category would include AI systems used for “indiscriminate surveillance”. The European Union would therefore seek to ban such systems and impose strict rules: companies and institutions that continue to use these “high-risk” systems could be fined up to 4% of annual turnover or 20 million euros.

Today, the European Commission presented its draft regulation “for excellence and trust in artificial intelligence”, stating: “The establishment of the first-ever legal framework on AI and a new plan coordinated with Member States will ensure the safety and fundamental rights of people and businesses, while strengthening AI adoption, investment and innovation across the EU. New rules on machinery and equipment will complement this approach by adapting safety rules to increase user confidence in the new versatile generation of products.”

Several levels of risk

Following the announcement of the new European regulation on artificial intelligence, the European Commission proposes to regulate AI systems according to several levels of risk, each corresponding to a more or less restricted use of the system concerned. The four risk levels defined by the regulation, in increasing order of severity, are:

  • Minimal-risk AI systems: EU legislation allows the free use of applications such as video games or spam filters based on AI. The draft regulation does not foresee any intervention in this area, as these systems pose little or no risk to the rights or safety of citizens.
  • Limited risk AI systems: This level of risk includes AI systems to which specific transparency obligations apply: When using AI systems such as chatbots, users need to know that they are interacting with a machine so that they can make an informed decision about whether to proceed.
  • High-risk AI systems: AI systems considered “high risk” include AI technologies used in critical infrastructure, education or vocational training, safety components of products, access to employment and self-employment, workforce management, essential private and public services, law enforcement, management of migration, asylum and border controls, and the administration of justice and democratic processes.

These AI systems can be placed on the market and used only if they comply with the following strict obligations:
– adequate risk assessment and mitigation systems;
– high-quality datasets feeding the system, to minimise risks and discriminatory outcomes;
– logging of activity to ensure traceability of results;
– detailed documentation providing all information necessary for the authorities to assess the system’s compliance;
– clear and adequate information for the user;
– appropriate human oversight to minimise risks;
– a high level of robustness, security and accuracy.

  • Unacceptable-risk AI systems: AI systems considered a clear threat to the safety, livelihoods and rights of individuals will be banned. This category includes AI applications that manipulate human behaviour to deprive users of their free will, and AI systems that enable social scoring by governments.

Coordinated plan announced

This new regulation is also accompanied by a coordinated plan among the member states of the European Union. The plan has several objectives: to create conditions conducive to the development of AI, to promote excellence in AI, to ensure that AI serves citizens and is a positive force for society, and to establish strategic leadership in high-impact sectors and technologies such as ecology, sustainable production, agriculture, the public sector and mobility.

The European Commission is also proposing an approach to new equipment and machinery. This new machinery regulation will ensure that the new generation of machinery offers the required safety to users and consumers and will encourage innovation. While the AI regulation will address the safety risks of AI systems, the new machinery regulation will ensure that AI systems are safely integrated into machines.

Margrethe Vestager, Executive Vice-President for a Europe Fit for the Digital Age, said of the EU initiative:

“When it comes to artificial intelligence, trust is not a luxury but an absolute necessity. By adopting these landmark rules, the EU is taking the lead in setting new global standards that will ensure that AI is trustworthy. By setting the standards, we can pave the way for ethical technology worldwide, while preserving the EU’s competitiveness. Future-proof and innovation-friendly, our rules will apply when strictly necessary: when the safety and fundamental rights of EU citizens are at stake.”

Translated from Europe : annonce officielle du projet de réglementation en matière d’intelligence artificielle