Europe: the European Commission plans to regulate the use of ‘high risk’ AI systems

As part of its forthcoming regulation on artificial intelligence, the European Union is considering strict rules. Among the measures addressed are AI systems used for “indiscriminate surveillance”, which the European Commission wants to ban. Companies and institutions that do not comply with the new regulation could be fined up to 4% of their turnover, capped at 20 million euros.

The place of AI systems in this regulation

On Tuesday, April 13, several media outlets claimed to have obtained a draft of this EU regulation on AI. The regulation would be one of the first of its kind, aiming to distinguish “beneficial” AI systems from those that could cause problems in the future. Systems that streamline manufacturing, model climate change or make the energy grid more efficient would be welcome.

On the other hand, AI systems the EU defines as “high risk”, such as those used for “indiscriminate surveillance”, would be banned. The proposal also seeks to ban AI systems that harm people by manipulating their behaviour, opinions or decisions; that exploit or target people’s vulnerabilities; or that are used for mass surveillance. Companies that use such systems improperly could be fined up to 4% of their turnover (up to a maximum of €20 million).

The question remains open, however, for other systems also considered “high risk”, such as algorithms used to screen curricula vitae, help judges make decisions, assess creditworthiness or process asylum and visa applications. These cases will be examined further by the European Commission, which wants to introduce compliance tests against European standards before products are launched on the market.

Possible exceptions

However, the European Commission reportedly plans to include exceptions for “high-risk” AI systems in its regulation. Competent authorities such as law enforcement could use these technologies to fight serious crime. For example, systems used for remote biometric identification – such as facial recognition – in public places could be allowed if their use is limited in time and geographically restricted to the area necessary for their purpose.

The European Commission also wants to ensure that any data collected under these exceptions is free of intentional or unintentional bias that could lead to discrimination. To that end, a European AI Board would be created, comprising one representative per EU member state and one representative of the European Commission. This board would oversee the strict application of the new regulation and could issue recommendations on the list of banned or permitted AI systems.

A European approach at odds with American intentions

US companies with subsidiaries or operations in Europe would likely be subject to these new rules, at a time when corporate surveillance practices are already one of the main sticking points in transatlantic cooperation.

The proposal is expected to be presented on 21 April by Margrethe Vestager, the European Commission’s executive vice-president for digital policy. It follows initial analysis carried out last year, which laid the groundwork for possible regulation of “high-risk” AI systems.

Following a cross-party letter from 116 MEPs highlighting the potential risks to fundamental rights posed by “high-risk” AI technologies, Commission President Ursula von der Leyen assured MEPs that the Commission would go further in this direction. This new regulation is the answer to those demands.

Translated from Europe : la commission européenne prévoit de réglementer l’utilisation des systèmes d’IA à “haut risque”