iBorderCtrl : a dangerous misunderstanding of what AI really is

iBorderCtrl is an AI-based lie detector project funded by the European Union's Horizon 2020 programme. The tool will be used on people crossing the borders of some European countries, officially to enable faster border control. It will be tested in Hungary, Greece and Latvia until August 2019, and should then be officially deployed.

Read more at: https://www.actuia.com/actualite/ibordercontrol-et-si-leurope-navait-pas-encore-compris-ce-quest-lintelligence-artificielle/

The project will analyze facial micro-expressions to detect lies. We have real worries about such a project. For those without any background in AI or computer science, the idea of using a computer to detect lies can sound appealing: computers are believed to be totally objective.

But the AI community knows this is far from true: biases are nearly omnipresent, and we have no idea how the dataset used by iBorderCtrl was built.

More broadly, we must remember that AI has no understanding of humans (to be honest, it has no understanding at all). It is only just starting to be able to recognize the words we pronounce, but it does not understand their meaning.

Lies rely on complex psychological mechanisms. Detecting them would require far more than a literal understanding of speech. Trying to detect them from a few key facial expressions looks utopian, especially as facial expressions vary from one culture to another. For example, nodding the head usually means "yes" in the Western world, but it means "no" in countries such as Greece, Bulgaria and Turkey.

AI is a great tool, but expecting it to understand humans better than humans themselves is a fantasy often shared by the lay public. It has already led to misguided projects in the past, such as a "homosexuality detector".

Doing research on lie detection is probably interesting, in a lab. But we believe it is a bad idea to deploy it in real life. Today, AI struggles to help us even when we try to cooperate with it. What happens when we try to mislead it? You only have to chat with Google Home or Alexa, which are among the best voice recognition systems ever built, for a minute to see their limits and be doubtful.

In conclusion, why not simply use AI on tasks for which humans are not essential, so that humans can focus on humans?

To read the original article in French:

iBorderCtrl : et si l’Europe n’avait pas encore compris ce qu’est l’intelligence artificielle ?