- Éric Scherer, director of the MediaLab at France Télévisions and chairman of the News Committee of the European Broadcasting Union, who is engaged in AI ethics and governance issues;
- Stanislas de Livonnière, head of the Data and Innovation department at Le Parisien, who experiments with new AI-generated narrative formats;
- Bénédicte Mingot and Jérémie Laurent-Kaysen, fact-checkers at France Télévisions, who explore the uses of AI in the fight against misinformation on a daily basis.
AI: Tool, Threat, or Revealer?
- What AI uses are currently integrated into journalistic practices?
- How can we ensure that generated content does not itself become a source of misinformation?
- And above all, what ethical and deontological safeguards need to be strengthened to preserve free, independent, and verifiable information?
Translated from Profession reporter : la Bpi explore l'impact de l’IA lors d’une rencontre à la Scam le 18 juin prochain
To better understand
What is generative AI and how does it work?
Generative AI is a subset of artificial intelligence that uses algorithms to create new and original content, such as text, images, or music. It often relies on deep neural networks, like transformer models, which are trained on large datasets to learn the underlying structures of language or visuals.
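To make this concrete, here is a minimal sketch of how a newsroom developer might try out text generation with a pre-trained transformer. It assumes the Hugging Face transformers library and the publicly available GPT-2 model, which are illustrative choices and not tools named in the article.

```python
# Minimal text-generation sketch (assumes Hugging Face "transformers" and GPT-2).
from transformers import pipeline

# Load a small, publicly available generative language model.
generator = pipeline("text-generation", model="gpt2")

# Ask the model to continue a journalistic-style prompt.
prompt = "Newsrooms are experimenting with artificial intelligence to"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model returns plausible continuations learned from its training data,
# which is why such output still needs human verification before publication.
print(outputs[0]["generated_text"])
```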
What are the existing regulatory frameworks for using AI in journalism?
Currently, there are few regulations specific to the use of AI in journalism. However, the European Union has adopted a general framework, the AI Act, which imposes transparency and accountability requirements. Legislation targeting online disinformation, such as the Digital Services Act, may also affect how AI is used, to ensure that generated content remains verifiable and reliable.