Artificial Intelligence Assessment Forum: Building trust and validating performance, or how to define an enabling environment for AI development

The Laboratoire National de Métrologie et d’Essais (LNE) is organizing the first forum dedicated to the evaluation of artificial intelligence (AI). The event will be an opportunity to discuss the development of new AI measurement and evaluation methods and to underline their importance. This central theme will be explored through several round tables, exchanges and shared experiences, covering topics such as the benefits of AI evaluation, existing evaluation platforms and those still to be developed, the implementation of trusted evaluation, and regulation.

The Villani report on artificial intelligence, presented to the government on March 28, 2018, recommended in particular the appointment of a “Mr. AI”. Bertrand Pailhès was thus appointed interministerial coordinator for artificial intelligence, and Renaud Vedel was chosen by Edouard Philippe to succeed him in March 2020. Developing new methods for measuring and evaluating AI has been a central concern throughout, and LNE plays a strategic role in this effort, particularly with regard to AI certification.

The first AI evaluation forum

The 1st Artificial Intelligence Evaluation Forum will take place on 24 November at the Cité des Sciences in Paris. Cédric Villani, MP for Essonne and President of the Parliamentary Office for the Evaluation of Scientific and Technological Choices (OPECST), Renaud Vedel, Prefect and Coordinator of the National Strategy for Artificial Intelligence, and Thomas Courbe, Director General of Enterprises, will be present.

The use of artificial intelligence systems is accelerating and promises more effective decision-making in increasingly rich and complex environments. Evaluating AI is therefore necessary to guarantee its reliability and performance, but also to promote its acceptance by society.

The evaluation of AI is therefore essential both for the user (selection, reception, appreciation of functionalities) and for the designer (design and development, certification). Beyond the efficiency of the systems themselves, it is also necessary to address issues of safety, ethics and quality of human-machine interaction, which are essential conditions for collective acceptance.

This theme will be at the heart of the 1st Forum on the Evaluation of Artificial Intelligence, where the economic, technological and regulatory aspects of the national strategy will all be addressed. Experts from industry and academia, as well as representatives of public authorities, will attend and exchange with the public.

Round tables, exchanges, sharing of experiences

Through shared experiences, accompanied by explanations of good practices in the evaluation and qualification of intelligent systems, several topics will be addressed throughout the day:

  • The benefits of AI evaluation;
  • Existing evaluation platforms and those still to be developed;
  • How to develop trusted evaluation and increase its acceptance;
  • The latest advances in regulation, standardization and certification.

The event will be structured around round tables on the following topics:

  • AI certification: an answer to increase the acceptability of AI systems, with Thomas LOMMATZSCH, Head of the Information Technology Certification Unit, LNE; Katya LAINÉ, Director and President of the Numeum Innovation & Technology Committee, Co-Founder & CEO, TALKR.ai by Kwalys; Pierre SELVA, Director of Conformity Assessment Strategy, Schneider Electric; and Aymeric de PONTBRIAND, Managing Director, Scortex.
  • Evaluation platforms, essential tools for testing and making AI systems reliable, with Jérôme PASCHAL, Technical Director, UTAC CERAM; Alexandre BOUNOUH, Director, CEA-LIST; Paul LABROGÈRE, General Director, IRT SystemX; Emmanuel BACRY, Scientific Director, Health Data Hub; and Guillaume AVRIN, Head of the Artificial Intelligence Evaluation Department, LNE.
  • Evaluating AI: what are the stakes for the ecosystem?, with Marc DARMON, Executive Vice President, Secure Information and Communication Systems, Thales; Etienne GRASS, Executive Vice President, Head of Citizen Services, Capgemini Invent; Gwendal BIHAN, Managing Director, Axionable; Alexandra BENSAMOUN, Professor of Private Law, Researcher at DATAIA, University of Paris-Saclay; and Françoise SOULIÉ-FOGELMAN, Scientific Adviser, Hub France IA.

The event will be held with the participation of: LNE, Capgemini Invent, Axionable, University of Paris-Saclay, Hub France IA, UTAC CERAM, CEA-LIST, IRT SystemX, Health Data Hub, Schneider Electric, Numeum, SCORTEX, DGE and AFNOR.

Translated from Forum de l’évaluation de l’intelligence artificielle : Créer la confiance et valider les performances, ou comment définir un environnement favorable au développement de l’IA