From disease diagnosis to the discovery of new treatments, AI promises to improve the precision and efficiency of healthcare. A third of French people have already consulted a generative AI for medical advice, but are they ready to trust it and follow its recommendations blindly, without even their doctor's validation?
Last March, FLASHS surveyed a panel of 2,003 French men and women aged 18 and over, representative of the French population, on behalf of the medtech Galeon to gauge their perception of AI in healthcare.

A Gradual Adoption Marked by Divisions

The survey reveals a growing but uneven familiarity with these technologies. While 64% of respondents (70% of men versus 58% of women) say they have heard of medical AI, only 10% consider themselves to have an in-depth understanding of it. The divide is clear between genders but also across generations: 68% of young adults aged 18-24 have already used AI, compared with only 10% of those over 65.
These disparities do not merely reflect differences in exposure to innovation. They may also point to differing attitudes towards risk, autonomy in medical decision-making, and trust in automated systems.

A Credible Technology... but in Search of Legitimacy

Trust in these tools nevertheless remains divided: 43% of French people grant some credibility to the answers provided by AIs such as ChatGPT or Google Gemini on health matters, compared to 45% who remain skeptical. Total trust is marginal (4%), while outright mistrust is more pronounced (16%).
Still, the perceived potential is real: diagnosis (48%) and therapeutic research (47%) top the list of the most relevant uses. More peripheral functions, such as administrative automation or epidemiological surveillance, attract less, though still tangible, interest. Only 13% believe AI adds no value to the healthcare sector.

Risk Management: Finding a Balance

Among those who have consulted an AI on health-related topics, six in ten say they followed the recommendations they received. A significant portion (17%) applies AI advice without any medical consultation, raising the question of stricter regulation in a sector historically governed by principles of responsibility, precaution, and traceability.
The quality of input data, often heterogeneous and sometimes biased, also directly affects the quality of the recommendations AI produces. And the opacity of the algorithms used, especially in so-called "black box" systems, limits understanding of the decisions generated, even by healthcare professionals themselves.
While human error is generally judged in light of the complexity of the case, an algorithmic error is perceived as a system flaw and is therefore far less tolerated: only 9% of French people accept it, compared to 20% for human error.

Can AI Surpass Human Expertise?

When asked about AI's ability to surpass human skills, a relative majority adopts a nuanced stance. Only 12% believe it could become more reliable than doctors across the entire care pathway. However, more than one in two (53%) acknowledge that it could outperform professionals in certain targeted, primarily technical areas, such as medical image analysis or the early detection of weak signals.

A Tool, Not a Substitute: The Demand for Transparency

The dehumanization of care is the main concern for the French (34%), followed by the risk of error (28%) and lack of human control (24%). While 30% of respondents say they are willing to accept a surgical operation performed solely by AI, 40% emphasize the necessity of a doctor's presence.
The French favor a hybrid approach: nearly half (49%) support their doctor using AI to refine a diagnosis or recommend a treatment. A key takeaway from the study is the demand for transparency. Four out of five French people consider it essential to be informed when AI is used in their care, and nearly half want to know precisely how it is used. This demand spans all generations and is observed regardless of gender.
Ethical regulation and clear practices will thus be crucial to fostering the acceptance of AI in healthcare in the coming years.

To better understand

What is a 'black box' AI system and why does it pose problems in medicine?

A "black box" AI system is an algorithm whose internal workings are opaque, making it difficult for users to understand or verify its decisions. This is especially problematic in medicine, where transparency is essential for validating recommendations and establishing accountability in care.
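To make the idea concrete, here is a minimal illustrative sketch in Python. It assumes the scikit-learn library and its bundled breast-cancer toy dataset; these choices are ours for illustration, not anything from the survey. An opaque model (here, a small neural network) returns a label with no human-readable rationale, whereas a shallow decision tree can print the exact rules behind its decision, which a clinician could audit.

```python
# Illustrative sketch (assumes scikit-learn is installed): contrasting an
# opaque model with an interpretable one on a toy diagnostic dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# "Black box": the neural network outputs a label, but its learned weights
# offer no human-readable explanation for any individual prediction.
opaque = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
opaque.fit(X, y)
print("Opaque prediction:", data.target_names[opaque.predict(X[:1])[0]])

# Transparent alternative: a shallow decision tree can print the exact
# decision rules it applies, making them open to human review.
transparent = DecisionTreeClassifier(max_depth=3, random_state=0)
transparent.fit(X, y)
print(export_text(transparent, feature_names=list(data.feature_names)))
```

Both models classify the same cases, but only the second can show its reasoning, which is the gap that concerns clinicians and regulators in the black-box debate.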