The Confiance.ai collective takes stock of the scientific and technological advances of its program dedicated to trusted AI in critical systems


The Confiance.ai program was launched in January 2021 as part of the Grand Défi “Securing, making reliable and certifying systems based on artificial intelligence” under the Programme d’investissements d’avenir (PIA). Its first significant results were unveiled the following October in Toulouse. This year, the Confiance.ai Days are being held at CentraleSupélec on the Paris-Saclay campus, where the members of the collective have gathered to share the program’s scientific and technological advances, feedback from the first deployments with partners, and future prospects.

Led by IRT SystemX, the Confiance.ai program is driven by a group of 13 French industrial and academic players (Air Liquide, Airbus, Atos, Naval Group, Renault, Safran, Sopra Steria, Thales, Valeo, as well as CEA, Inria, IRT Saint Exupéry and IRT SystemX) and has a budget of €45 million over four years. It aims to meet the challenge of industrializing AI in critical products and services, where an accident, failure or error could have serious consequences for people and property.

The objective of the program, over the period 2021 to 2024, is to design and provide a platform of software tools dedicated to the engineering of innovative industrial products and services incorporating AI.

The first targeted sectors are automotive, aeronautics, energy, digital, Industry 4.0, defense and maritime, with applications such as in-line industrial control, autonomous mobility and decision-support systems.

A collective of 50 partners

The program aims to federate an ecosystem of industrial and academic partners and research laboratories around the ambition of making France a leader in trusted AI. A collective of nearly 50 industrial and academic partners thus quickly formed around the 13 founders:

  • Twelve start-ups and SMEs, winners of the AMI (call for expressions of interest) inviting them to participate in the priority actions of the national AI strategy, have enriched the program’s work with their simulation, human-computer interaction, testing and explainability technologies;
  • A doctoral program of 8 theses and 4 postdocs was also set up at the end of 2021, following an academic call for proposals;
  • The laboratories IRIT Toulouse, Onera, Inria, Cristal CNRS, Lamih – Lille, LIP6 – Sorbonne University, IMT – Toulouse, U2IS-ENSTA, LITIS – INSA Rouen and CRIL – Université Artois joined through an AMI on Human and Social Sciences launched in 2022;
  • Other strategic partnerships have been initiated, such as the one signed with ANITI in September 2022 to mature the knowledge produced by ANITI around certifiable and hybrid AI through contact with industrial use cases.

Contributing to the operational implementation of the future AI Act

Through its draft AI regulation, the AI Act, the EU seeks to frame AI and its uses so as to make it trustworthy, ethical, sustainable, inclusive and human-centered. It thus aims to establish a regulatory and legal framework governing AI technologies designed within member states, as well as those of operators dealing with them, by classifying them into four categories according to their level of risk.

The Confiance.ai program will contribute to the operational implementation of this future regulation by industrial players, offering them a technical environment that guarantees a high level of trust in AI-based products and services.

2021: The first version of the trust environment

The program addresses the main themes around which trusted AI is built: methods and guidelines for the design of trusted AI, characterization and evaluation of trusted-AI-based systems, design of trusted AI models, data and knowledge engineering, certification and IVVQ (integration, verification, validation and qualification), and embeddability.

  • Interviews conducted with the program’s industrial partners confirmed that, despite the interest in AI and its potential, adoption remains timid. Their concerns relate mainly to the lack of a reliable and efficient design framework on which they can rely and refer to;
  • Both a top-down approach (definition and detailed specification of the program’s operational needs and of the expected capabilities of the trust environment) and a bottom-up approach (selection and testing of the relevance of existing components, libraries, model algorithms, etc.) were implemented. About twenty substantial scientific state-of-the-art reviews were thus produced on the various topics addressed by trusted AI: monitoring, data and knowledge engineering, symbolic artificial intelligence, and the characterization of the notion of trust;
  • Eleven initial use cases concerning real operational problems, contributed by the partners, give the teams a concrete repository of constraints, models, data and objectives on which to base their work and test the various technological and methodological components identified, in order to validate (or not) their relevance in the context of trusted AI.

A first version of the trust environment was delivered in late 2021. It includes a development environment backed by an MLOps chain (a pipeline for processing machine-learning models and deploying them automatically). It is available to the teams and serves as the program’s working environment.
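The article does not name the tools that make up this MLOps chain. As a purely illustrative sketch, under the assumption of a standard open-source stack (MLflow and scikit-learn, neither of which is confirmed to be part of the Confiance.ai environment), the snippet below shows the core of such a chain: training a model, tracking its parameters and metrics, and logging the resulting artifact so a later stage can deploy it automatically:

```python
# Minimal MLOps sketch: experiment tracking and model logging.
# Assumed tooling (MLflow, scikit-learn), not the actual Confiance.ai stack.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="trust-env-demo"):
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Record what was trained and how well it performed, so the run is
    # reproducible and auditable before any automated deployment step.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("test_accuracy",
                      accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, artifact_path="model")
```

A deployment stage would typically then retrieve the logged model from the tracking server; the value of such a chain for trusted AI lies in this traceability from training run to deployed artifact.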

Early 2022: Successful deployment in partner engineering departments

In the spring of 2022, the first version of the trust environment was made available to the industrial partners. Some of them have already redeployed it, providing initial operational feedback.

For Safran, the initial feedback has been very positive, and the company is now looking to extend its experimentation with the environment by implementing it on an internal use case.

Jacques Yelloz, chief engineer in the AI domain at Safran Electronics & Defense, states:

“We were able to easily adapt the installation procedure of the trust environment with the support of Confiance.ai. Our team installed the environment on one of our computing servers, on premises. This operation is strategic for us given the sensitive nature of our activities, as we can now apply the building blocks of this environment to our internal use cases without having to use a public cloud. In the coming months, we plan to evaluate the interoperability of the MLOps tools with the explainability and robustness tools developed by Confiance.ai on our use cases. We are looking forward to version 2 and the new features announced, particularly around data.”

For Sopra Steria, it is the possibility of redeploying the program’s assets individually that represents decisive added value for the company’s activities.

Yves Nicolas, Deputy Group CTO of Sopra Steria, explains:

“The 2022 work of Confiance.ai is already enabling us to realize the promise of trustworthy AI that can be deployed in production. On several business use cases, we have been able to evaluate several trust attributes, such as explainability and robustness, within an industrial MLOps chain, ready to comply with future regulations such as the AI Act.”

Finally, for Renault, which contributes one of the program’s reference use cases, the interest lies in the effective implementation of the first results.

Antoine Leblanc, AI @ industry 4.0 / DSII / PESI expert at Renault Group, comments:

“The challenge of adopting and integrating AI solutions in industrial systems is all the more important as it is accompanied, for Renault Group’s manufacturing teams, by a change in culture and methods. The Confiance.ai program provides us with turnkey tools, tested on industrial use cases proposed by our teams, which allow us to consolidate our global approach to industrial data management. Whether it is help with annotation quality, data visualization or measuring the social acceptability of AI at an industrial workstation, the solutions proposed by the Confiance.ai program partners strengthen the robustness of our processes and shorten the time it takes to put data to use.”

End of 2022: A second version of the trust environment and four platforms dedicated to the major issues of trusted AI

Work in 2022 focused on the following issues:

  • Maturing work on robustness, explainability, monitoring, the data life cycle and the characterization of trust;
  • Embeddability of AI components and compliance with reference standards;
  • Insertion of new engineering processes, dedicated here to the design of trusted AI, into the engineering workshops used by industrial partners, a central subject of study since it guarantees the ultimate usability of the trust environment.

In 2022, the environment notably offers four platforms dedicated to major issues in trusted AI:

  • A platform dedicated to managing the data life cycle (acquisition, storage, specification, selection, augmentation);
  • A set of libraries dedicated to the robustness and monitoring of AI-based systems; these libraries ensure that the system and its AI component stay within the previously defined operating context (the Operational Design Domain);
  • A platform dedicated to explainability, whose objective is to render the choices and decisions made by an AI in terms understandable by a human (a minimal illustration follows this list);
  • A platform dedicated to the embeddability of AI components, which must make it possible, on the one hand, to identify the design constraints to be respected on the basis of the hardware specificities of the target system and, on the other hand, to accompany the component throughout its development, up to its deployment in the system.
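The program’s explainability components are not detailed in the article. As a hedged illustration of what rendering an AI’s decisions in human-understandable terms can look like in practice, the sketch below uses the open-source SHAP library (an assumed example, not one of the Confiance.ai platforms) to attribute a model’s predictions to its input features:

```python
# Minimal explainability sketch: per-feature attributions with SHAP.
# Assumed tooling (SHAP, scikit-learn), not a Confiance.ai component.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Shapley values quantify how much each input feature pushed a given
# prediction away from the model's average output.
explainer = shap.Explainer(model, X)
shap_values = explainer(X[:5])

for i, sv in enumerate(shap_values):
    top = max(range(X.shape[1]), key=lambda j: abs(sv.values[j]))
    print(f"sample {i}: most influential feature = x{top} "
          f"(contribution {sv.values[top]:+.3f})")
```

Output of this kind (per-decision feature contributions) is one common way of making a model’s behavior inspectable by engineers and, ultimately, by auditors.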

To date, more than 100 software components (applications, libraries, etc.) are being designed within the framework of the program, at different levels of maturity. Progressively evaluated and integrated, they are also made available to the partners so that they can be used in their own engineering workshops.

Translated from Le collectif Confiance.ai fait le bilan des avancées scientifiques et technologiques de son programme dédié à l’IA de confiance dans les systèmes critiques