Protecting fundamental rights in the age of AI: recommendations from the Human Rights Defender


While the European Commission is working on a draft legal framework for artificial intelligence systems, the French Human Rights Defender Claire Hédon is calling for special attention to be paid to discrimination resulting from algorithmic processing. According to the authority, fundamental rights must be at the heart of the legal framework around AI.

AI systems need to be supervised

AI systems have become widespread, especially since the health crisis. However, the algorithms on which they rely are not necessarily known to the general public. Their use in high-risk areas (social services, policing, justice, human resources, etc.) makes them as much a source of progress as of risk for fundamental rights.

Moreover, research has shown that discriminatory biases can arise during their design and deployment. In recruitment, for example, gender bias has been identified in algorithms used to sort résumés. According to the Human Rights Defender, these algorithms are “the mathematical translation of historical discriminatory practices”. With equal qualifications, these algorithms almost systematically rejected women’s applications in favor of men’s. It is for this reason that Amazon scrapped one such system in 2015.

As the Human Rights Defender reminds us, algorithms reflect the structural biases of our society: “algorithms are developed by humans, and therefore from data reflecting human practices.”

She has thus published an opinion entitled “For a European AI that protects and guarantees the principle of non-discrimination”. The recommendations were co-written with Equinet, the European network of equality bodies, and are in line with the institution’s previous work. They emphasize the priority of fighting algorithmic discrimination by providing a solid legal framework and avenues for recourse. The Human Rights Defender also insists on the role that European equality bodies could play in this context.

Fighting against algorithmic discrimination: the recommendations of the Human Rights Defender

The Human Rights Defender recommends that the European Commission make the principle of non-discrimination a central concern in any regulation dedicated to AI. She proposes the following actions:

  1. Make the principle of non-discrimination a central concern in any European regulation dedicated to AI.
  2. Establish in all European countries accessible and effective complaint and redress mechanisms for affected individuals.
  3. Apply a fundamental rights approach to defining “harm” and “risk”.
  4. Require equality impact assessments at regular intervals throughout the life cycle of AI systems.
  5. Assign “equality duties” enforceable against all AI developers and users.
  6. Allow differentiation by risk level only after a mandatory analysis of the impact on the principle of non-discrimination and other human rights.
  7. Require new national supervisory authorities to consult with equality bodies and relevant fundamental rights institutions.
  8. Require the establishment and funding of cooperation mechanisms. These will allow the different bodies involved in the implementation of the AI regulation to coordinate at both European and national levels.

At a time when an unprecedented regulation is taking shape at the European Commission, the Human Rights Defender wishes to recall a paramount requirement: the right to non-discrimination must be systematically respected and the rights of all must be ensured.

Translated from “Protéger les droits fondamentaux à l’ère de l’IA : les recommandations de la défenseure des droits”.