Dealing with the ethical and societal issues of AI: 3 questions to Hélène Chinal and François Taddei


Impact AI, a collective of reflection and action bringing together actors in the field of artificial intelligence around two shared objectives: addressing the ethical and societal issues of AI and supporting innovative projects with a positive impact on the world of tomorrow, will hold the Explor’AI conference on June 10.

How can we educate people about AI and train those who are, or will be, involved in developing AI in an ethical and inclusive way? This is the subject of the day’s opening round table, in which François Taddei, researcher, president and founder of the Centre for Interdisciplinary Research (CRI), and Hélène Chinal, Head of Transformation for SBU South & Central Europe at Capgemini and Vice President of Impact AI, will take part alongside Valérie Pehririn, AI and data specialist at Capgemini Invent, Damien Bourgeois, Head of Engineering, Expertise and Educational Innovation at Axa, and Joel Courtois, Director of Epita.

ActuIA: Beyond technical skills, what can we say about the current state of ethical and responsible AI?

Hélène Chinal: First of all, I think it is important to define what we mean by responsible and ethical AI. Creating an artificial intelligence ethically means making sure the AI does not contain bias, but also understanding that its use must not harm human beings. This is the spirit of the draft regulation at the European level. AI systems learn from data: if the data is biased, they reproduce those biases, and with them the prejudices that already exist in society. This is why ethics has become an essential consideration when creating an AI, and why it belongs at the heart of education.
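To make this mechanism concrete, here is a minimal Python sketch (not taken from the interview; the scenario, feature names and figures are all invented): a model trained on historically biased labels reproduces the disparity even when the protected attribute is withheld, because a correlated proxy feature stands in for it.

```python
# Invented, self-contained example: the bias is in the historical
# labels, not in the learning algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# A protected attribute (two demographic groups, 0 and 1)
group = rng.integers(0, 2, size=n)
# A genuinely relevant feature, independent of group
skill = rng.normal(size=n)
# A proxy feature statistically correlated with group (e.g. a postcode)
proxy = np.where(rng.random(n) < 0.8, group, 1 - group)

# Historical decisions were prejudiced: at equal skill, group 1 was
# selected less often.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Train WITHOUT the protected attribute itself...
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# ...yet the model reproduces the disparity through the proxy.
for g in (0, 1):
    print(f"selection rate, group {g}: {pred[group == g].mean():.2f}")
```

On a typical run the selection rate for group 0 comes out well above that for group 1, even though the model never sees the group attribute directly.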

François Taddei: Yes, except that in the vast majority of cases, ethics is not yet taught as a subject in its own right, the way technical skills are. For the moment, it rests on individual ethical reflection, and this capacity for ethical reasoning, for questioning, for challenging a model, is not necessarily the quality most sought after or most stimulated in a student at an engineering or software development school. Real pedagogical work is needed to raise awareness of biases, and this for all types of bias: gender, diversity, and so on.

ActuIA: What can be done to advance this awareness?

Hélène Chinal: This is one of the issues CSR departments in companies are working on, and a subject particularly close to my heart, as I have been involved in gender and diversity issues for many years. In my opinion, we need to get to the root of things and educate in order to deconstruct biases: not only involving those who are already motivated and concerned, but also reaching those who do not yet see this as an issue. Learning to recognize bias should be part of the curriculum of any higher education course. Within companies, training and the involvement of managers are essential.

François Taddei: Absolutely, it’s easier to correct an algorithm than a bias! To understand that a database contains biases, you first need to know how to identify them. There is real collective reflection to be carried out here, for example by compiling lists of unbiased databases and methods to equip developers and engineers with checklists. But the most important thing, for me, is to develop our capacity, and that of students, to challenge the question being posed. The question of meaning, which is at the heart of ethical concerns, is losing ground. Those sounding the alarm are currently powerless to change how their companies operate. This is the situation today in Silicon Valley, where individuals who raise ethical questions are rejected by the system.
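As one illustration of what such a checklist could look like in practice, here is a hedged Python sketch: an audit function with two simple checks, group representation and label-rate parity across groups. The function name, checks and thresholds are assumptions made for the example, not a method described by the speakers.

```python
import pandas as pd

def audit_dataset(df: pd.DataFrame, protected: str, label: str,
                  representation_floor: float = 0.3,
                  parity_gap_ceiling: float = 0.1) -> list[str]:
    """Return human-readable warnings about potential bias in a dataset."""
    warnings = []

    # Check 1: is every protected group reasonably represented?
    shares = df[protected].value_counts(normalize=True)
    for grp, share in shares.items():
        if share < representation_floor:
            warnings.append(
                f"group {grp!r} is only {share:.0%} of the data "
                f"(floor: {representation_floor:.0%})")

    # Check 2: do positive-label rates differ strongly across groups?
    rates = df.groupby(protected)[label].mean()
    gap = rates.max() - rates.min()
    if gap > parity_gap_ceiling:
        warnings.append(
            f"positive-label rate gap across groups is {gap:.0%} "
            f"(ceiling: {parity_gap_ceiling:.0%}); labels may encode bias")

    return warnings

# Toy usage on an invented hiring dataset
data = pd.DataFrame({
    "gender": ["F", "M", "M", "M", "F", "M", "M", "M"],
    "hired":  [0,    1,   1,   0,   0,   1,   1,   1],
})
for w in audit_dataset(data, protected="gender", label="hired"):
    print("WARNING:", w)
```

Returning warnings rather than raising errors keeps the audit advisory: the point, as Taddei suggests, is to give developers a systematic prompt to look, not to automate the ethical judgment away.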

ActuIA: There is a lot to do; where should we start? Do you have examples of what is being done in your respective organizations?

François Taddei: At the CRI, we put students in a position to open black boxes, whether relational, emotional or algorithmic. To do this, we let them be themselves. We encourage them to take care of themselves, of others and of the planet. Science and technology should not be ends in themselves, but means to an end.

Hélène Chinal: Within Capgemini, we have developed a code of conduct for artificial intelligence projects, and the Capgemini Research Institute recently conducted a study that underlines the importance of transparency, but also of proactively taking diversity and inclusion into account throughout the entire AI life cycle to ensure a “human” AI. Beyond that, all our employees are trained in our ethical principles, in line with our values.

Translated from Traiter les enjeux éthiques et sociétaux de l’IA : 3 questions à Hélène Chinal et François Taddei