The Confiance.ai collective unveils the four winners of its call for SHS expressions of interest


As the technological pillar of the Grand Challenge “Securing, certifying and making reliable systems based on artificial intelligence” launched by the French government, Confiance.ai is the largest technological research program of the #AIforHumanity plan, which aims to make France a leading country in AI. The collective has just announced the four winners of the call for expressions of interest (AMI) it launched last April for the humanities and social sciences (SHS) research community, which works in particular on AI for trust.

Launched in 2021, Confiance.ai is led and coordinated by the IRT SystemX. Founded by a group of 13 French industrial and academic partners (Air Liquide, Airbus, Atos, Naval Group, Renault, Safran, Sopra Steria, Thales, Valeo, as well as the CEA, Inria, the IRT Saint Exupéry and the IRT SystemX), it has a budget of €45 million over four years and aims to meet the challenge of industrializing AI in critical products and services.

This ambition is structured around five axes:

  • AI characterization,
  • Trusted AI by design,
  • Data and knowledge engineering,
  • Mastering AI-based systems engineering,
  • Trusted AI for embedded systems.

During the four years of this program, the partners will also work to remove many scientific barriers.

Since its creation, the collective has launched several calls for expressions of interest (AMI). One of them, focused on scientific challenges, enabled 11 laboratories to join the collective at the Paris-Saclay and Toulouse sites to help mature scientific work or resolve upstream scientific barriers, most often in the form of doctoral research. The collective was also joined by 12 start-ups and innovative SMEs following an AMI in July 2021, and it currently has about 40 members.

THE SHS AMI

The objective of this call was to invite humanities and social sciences laboratories to support the resolution of the Confiance.ai program’s scientific barriers through an SHS approach. The scientific challenges related to the following three themes:

  • trust and system engineering with AI components;
  • trust and learning data;
  • trust and human interaction.

The call aims to complement the program’s technological developments with work, fully financed by the program, on the appropriation of trustworthy AI by its future designers, users and customers.

The eleven proposals received were evaluated by a selection committee comprising representatives of the program and two external experts. The committee then held seven hearings and ultimately recommended four proposals to the Confiance.ai steering committee, which followed its recommendations.

The winners will work on use cases provided by the Confiance.ai program’s industrial partners. Most of the work will begin in September 2022 and will provide recommendations and additional elements to the program in early 2023. Some of these proposals may lead, depending on the needs identified, to research work in the form of theses or post-doctoral fellowships.

The official launch event of the work should be held in September 2022.

The four winning proposals

Benoit Leblanc (ENSC) et al., with ONERA and the Cognition Institute: Experimental study of user trust in AI systems

The proposal explores how individuals react to systems that use AI, with trust as a pillar of these reactions. Studying these reactions yields scientific insights both for the industrialization of anthropotechnical systems, such as transportation devices, and for the deployment of these systems in their application areas.

Enrico Panaï (EM Lyon) & Laurence Devillers (LISN-CNRS): Mapping the moral situation: analysis of use cases

To build trust, the authors propose to delimit the space of action and identify its constituent elements. One of the most promising methods is to map the space in which the action takes place. This process makes it possible to position moral situations at an appropriate level of granularity in order to recognize sets of actions at the individual, social, organizational or human-machine interface levels.

Marion Ho-Dac (CDEP, Univ. Artois) et al., with the AITI Institute: Respect for European Union values by design in AI systems

CDEP brings expertise specifically drawn from legal sciences, focused on compliance with the broadly understood EU legal framework, including in particular the values of the Union (as defined in Article 2 of the EU Treaty), the EU Charter of Fundamental Rights and the legal framework of the European judicial area.

Arnaud Latil et al. (SCAI): The interfaces of algorithmic systems: what information should be communicated to generate trust?

The authors propose to focus their analysis on the interfaces of algorithmic systems. The aim is to study the effects on trust of legal messages communicated by the producers of AI systems.

Translated from Le collectif Confiance.ai dévoile les quatre lauréats de son appel à manifestations d’intérêt SHS