Are artificial intelligence tools designed to combat Covid-19 effective?

According to the MIT Technology Review, many tools have been developed to combat Covid-19. However, their performance has not lived up to the expectations of doctors and specialists: none of the predictive tools made a real difference, and worse, some of them had unwelcome and even potentially harmful effects. A look back at the AI tools in which researchers placed high hopes for fighting the Covid-19 pandemic, but which ultimately did not deliver the expected results.

An unprecedented situation that doctors had to adapt to

In March 2020, Covid-19 hit Europe with full force, first in Italy, then in France, where the disease caused many deaths and infections, plunging hospitals into a severe health crisis whose full extent is still poorly understood today. Doctors and nurses, who were facing such a situation for the first time, had no real idea how to manage all the patients, according to Laure Wynants, an epidemiologist at Maastricht University in the Netherlands who studies predictive tools.

In China, the virus had been circulating since December, and researchers there were already conducting studies and surveys to learn more about it and defeat what was then still an “epidemic”. Laure Wynants said at the time: “If there was ever a time when AI could prove its usefulness, it is now. I was hopeful”. The goal seemed “simple” in theory: train machine learning algorithms on Chinese data collected between December 2019 and March 2020 to help doctors better understand the virus and make the right decisions to save lives.

Artificial intelligence serving doctors in the fight against Covid-19? Not quite…

Unfortunately, the goal was not reached. Not for lack of effort, far from it: teams of researchers from all over the world mobilized to help. Nor because nothing was developed: several hundred predictive tools were built to help front-line staff better diagnose and triage patients according to their symptoms, for example.

It was not achieved because none of these tools made a real difference. Several studies reached this conclusion, including one by the Alan Turing Institute, the UK’s national centre for data science and AI, which found that AI tools had little or no impact in the fight against Covid-19. These findings are in line with the results of a study published by Laure Wynants.

She and her colleagues looked at 232 algorithms used to diagnose patients or predict how sick people with the disease might get. They found that none of them were suitable for clinical use. Only two were identified as promising enough for future testing.

What exactly were the problems with AI tools in the face of Covid-19?

Many of the problems uncovered stem from the poor quality of the data researchers used to develop their tools. Information about Covid-19 patients, including medical tests, was collected and shared in the middle of a global pandemic, often by the very doctors struggling to treat those patients.

Researchers wanted to help quickly, and these were the only publicly available datasets. But that meant many tools were built using mislabeled data or data of unknown provenance, as Derek Driggs, a machine learning researcher at the University of Cambridge, noted in his study of deep learning models for Covid-19 diagnosis.
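
To give a sense of what checking provenance and labels can look like in practice, here is a minimal sketch in Python. The column names, source names and metadata table are hypothetical illustrations, not taken from Driggs’ study; the idea is simply to filter out records with missing labels or undocumented sources before any model is trained.

```python
import pandas as pd

# Hypothetical metadata table accompanying a pooled Covid-19 imaging dataset.
meta = pd.DataFrame({
    "image_id": ["a01", "a02", "b01", "b02"],
    "label":    ["covid", None, "non-covid", "covid"],
    "source":   ["hospital_a", "hospital_a", None, "scraped_web"],
})

# Keep only records that have a label and come from a documented, trusted source.
trusted_sources = {"hospital_a", "hospital_b"}
clean = meta[meta["label"].notna() & meta["source"].isin(trusted_sources)]

print(f"Kept {len(clean)} of {len(meta)} records after the provenance audit")
```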

Both teams (Laure Wynants’ and Derek Driggs’) found that researchers were repeating the same basic errors in the way they trained or tested their tools. Incorrect assumptions about the data often meant that the trained models did not work as expected. Driggs highlights the problem of what he calls “Frankenstein datasets”: datasets assembled from multiple sources that can contain duplicates. As a result, some tools end up being tested on the same data they were trained on, which makes them appear far more accurate than they really are.
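
The sketch below illustrates the “Frankenstein dataset” pitfall with a simple, assumed setup (hypothetical folder names, images identified by a content hash): merging overlapping sources and splitting naively lets the same scan land in both the training and the test set, whereas deduplicating by file hash before the split avoids that leakage. This is an illustration of the general problem, not code from either study.

```python
import hashlib
from pathlib import Path
from sklearn.model_selection import train_test_split

def file_hash(path: Path) -> str:
    """Hash raw file bytes so the same scan is recognised across sources."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Hypothetical folders: two public chest X-ray collections that partially overlap.
sources = [Path("source_a_xrays"), Path("source_b_xrays")]

# Naive "Frankenstein" merge: concatenate every file from every source.
all_files = [p for src in sources for p in src.glob("*.png")]

# Deduplicate by content hash so one scan cannot appear in both train and test.
unique_by_hash = {}
for p in all_files:
    unique_by_hash.setdefault(file_hash(p), p)
deduped_files = list(unique_by_hash.values())

# Split only after deduplication; splitting the raw merge risks evaluating
# the model on images it has already seen during training.
train_files, test_files = train_test_split(deduped_files, test_size=0.2, random_state=0)

print(f"{len(all_files)} files merged, {len(deduped_files)} unique after deduplication")
```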

What solutions for the future?

Of course, researchers remain convinced that AI can help in this kind of exceptional situation. The most common mistake was failing to curate the datasets made available to researchers. Another issue, ironically, is a form of selfishness: researchers did not share their AI models and tools so that others could test them and build on them to create better ones. Laure Wynants explains:

“The models are so similar – they almost all use the same techniques with minor adjustments, the same inputs – and they all make the same mistakes. If all the experts making new models instead tested models that were already available, we might have something that could really help hospitals right now…”

To solve this problem, the World Health Organization is considering an emergency data-sharing contract that would go into effect during international health crises. This would make it easier for researchers to move data across regions of the world.

Translated from Est-ce que les outils d’intelligence artificielle conçus pour lutter contre le Covid-19 sont efficaces ?