Impact AI Collective Unveils a Practical Guide for Trustworthy AI

This week, the Impact AI collective presented a practical guide to help companies implement trustworthy artificial intelligence (AI). Start-ups, SMEs, CAC 40 companies, consulting firms and academic institutions took part in designing the guide. The think tank has also reached out to the French government to promote the emergence of responsible AI.

Created in 2018, the Impact AI think tank aims to shed light on the ethical and societal issues raised by artificial intelligence and to support virtuous initiatives for tomorrow’s world. This week, it published a practical guide for trustworthy AI.

This unprecedented initiative brings together French companies that are leaders in their fields. They share their experience and work together to make trustworthy AI synonymous with French AI, while maintaining a high level of performance and economic results.

Discover the complete guide: www.impact-ai.fr/guideiaconfiance/

A guide to lead the French ecosystem towards trustworthy AI

While the success of AI depends on the trust its users place in it, one third of French people still do not trust AI (IFOP survey for Impact AI, November 2019). As artificial intelligence is being deployed rapidly throughout the economy, it is more necessary than ever to disseminate the principles and practices of ethical and responsible AI.

This is the purpose of the guide, in which Impact AI members set out and demonstrate, from general principles to operational practice, concrete ways to implement AI governance within a company.

In line with the ethical principles established by the European Union and the OECD, some thirty experience reports illustrate how to design, build and operate reliable AI systems. The strength of the guide lies in its concrete approach, which each company can adopt at its own level and according to its own context.

A self-assessment tool also allows companies to easily evaluate their governance arrangements. The good practices are grouped under four themes: ethical principles, sponsorship, governance model, and protocols and tools. Each theme is assessed across five maturity levels.

A unique initiative in Europe

Prefaced by Renaud Vedel, national coordinator for artificial intelligence, the guide is valuable both for companies that use AI and for the government, which wants to compare the concrete implementation of trustworthy AI with the theory of French and European regulations in order to refine them.

The challenges for organizations and companies are numerous: while the use of artificial intelligence provides many competitive advantages, it can also, when not used properly, create risks to corporate strategy, managerial risks with employees, and reputational risks.

The many experience reports collected in the guide show that the ecosystem is mature enough to generalize this type of AI.

“Even if we don’t have all the technical answers to guarantee a flawless AI, its use and value are a reality. French organizations are already demonstrating a genuine willingness to implement mechanisms to manage AI responsibly. This is why Impact AI has shaped this guide: to allow everyone to approach the subject in a responsible and operational way. More than ever, this mission is indispensable, and the effort must continue,” says Marcin Detyniecki, Head of Research and Development and Group Chief Data Scientist at AXA, who leads the Impact AI working group.

Impact AI members who contributed to this guide:

  • AXA
  • CEA
  • Deloitte France
  • Institute of Actuaries
  • MAIF
  • Orange
  • Schneider Electric
  • Substra
  • Axionable
  • Inter’Elles Circle
  • Grenoble EM
  • Macif
  • Microsoft
  • PwC France
  • SNCF Network
  • Thales

Translated from Le collectif Impact AI dévoile un guide pratique pour une IA digne de confiance