As part of France 2030, the French government has launched a Grand Challenge aimed at "securing, certifying and making reliable systems based on artificial intelligence," led by the General Secretariat for Investment (SGPI) and funded by the Plan d'Investissement d'Avenir. AFNOR, the French standards association, has been mandated to "create the normative environment of trust accompanying the tools and processes for the certification of critical systems based on artificial intelligence". It recently published a strategic roadmap presenting six key areas for AI standardization.
Developing trusted AI is essential. To this end, the French government has committed €1.2 million, under the Future Investment Program (PIA) and the France Recovery Plan, to facilitate the creation of consensus-based, globally accepted standards.
Cédric O, Secretary of State for Digital Transition and Electronic Communications, declared:
"Trusted AI, i.e. AI for critical systems, is needed today in many fields such as autonomous cars, aeronautics or space."
This last pillar is entrusted to AFNOR, which brings together many players in the AI ecosystem, with the aim of creating synergies within France, with other countries within the framework of the International Organization for Standardization (ISO), and with other international consortia.
To structure the ecosystem, the association will set up a platform for cooperation between French AI players, lead strategic standardization actions, and develop European and international cooperation.
A national lack of understanding of standardization
Not all French companies understand the strategic importance of standards, especially start-ups, SMEs and mid-sized companies (ETIs), which, insufficiently integrated into the standardization ecosystems, do not grasp what is at stake. Economic players seem uninterested in standards, even though they are concerned about the application of regulations and compliance. Yet the experts of the companies concerned contribute directly to the drafting of standards, at the national level as well as at the European and international levels, and these standards will serve as technical support for the European regulations.
This European regulation follows, at the European level, the Data Governance Act presented in November 2020, the GDPR in force since 2018, and the European Parliament's study of the role of AI in the Green Deal.
AFNOR's roadmap
260 French AI players took part in the consultation conducted in the summer of 2021 to establish this AI standardization strategy. All companies in the ecosystem will be able to participate in the development of standards within the standardization committees. Patrick Bezombes, chairman of the French standardization committee, assures:
"The contribution is not reserved for large groups, quite the contrary. Start-ups and SMEs are an essential part of the ecosystem, and they must make their voices heard and give their point of view: the directions chosen will have a direct impact on them, right at the heart of their business.The roadmap includes 6 axes:
- Develop standards on trust
- Develop standards on AI governance and management, notably:
  - an AI quality management system: ISO 42001 (AI management system);
  - an AI risk management system: ISO 23894.2 (AI risk management).
They could become established at the global level, just like ISO 9001, which is now the international reference in quality management, and become harmonized European standards.
- Develop standards on AI oversight and reporting
Reporting processes will make it possible to flag major incidents so that they can be handled in real time before they spread. In the event of incidents or accidents, audits will be conducted on the products and on the standards on which they are based.
- Develop standards on the competencies of certification bodies
- Develop standardization of certain digital tools
- Simplify access to and use of standards