Google I/O: announcement of the LaMDA chatbot, capable of conversing naturally with a human

After announcing its new TPUs and its MUM artificial intelligence technology, Google introduced its Language Model for Dialogue Applications (LaMDA) at the annual Google I/O conference. It is the company’s latest breakthrough in natural language understanding. The chatbot’s main objective: to converse fluidly and naturally with a human.

A model trained to interact with a human

Like BERT, GPT-3 or MUM, LaMDA is built on Transformer, a neural network architecture widely used in language processing. But unlike most other language models, this one is designed for dialogue. During training, it learned many of the nuances that distinguish open-ended conversation, such as sensibleness.

For example, if a person says, “I just started guitar lessons,” the other person might reply, “How exciting! My mother also has a guitar and loves to play it.” That answer makes sense in light of the first statement and could well occur in an ordinary dialogue between two people. The developers have worked to ensure that the answers the system produces make sense and resemble a normal conversation between two people.

This was made possible by research conducted last year, which showed that dialogue-trained models based on the Transformer architecture could learn to talk about virtually any topic. LaMDA was designed with this in mind.

Capturing the nuances of a conversation and responding accordingly

During a conversation between two people, a topic may come up early on and then be dropped as the two interlocutors gradually drift toward another subject. In most cases, chatbots or virtual conversational agents get lost here, locked into thematic tunnels predefined by scripts.

LaMDA aims to respond fluidly on a virtually unlimited number of topics. This technology could allow humans to converse with their devices in a natural, non-mechanical way. The image below shows the set of possible responses, and the paths they open up, for a line proposed by the user. The model chooses the one that best fits the situation:

[Image: diagram of LaMDA’s candidate responses and dialogue paths]
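The selection step described above can be sketched as follows. This is a toy illustration, not Google’s implementation: the `score` heuristic (simple word overlap with the context) and `choose_reply` are hypothetical stand-ins for the learned quality metrics a model like LaMDA would use to rank candidate replies.

```python
def score(context: str, candidate: str) -> float:
    """Toy relevance score: fraction of candidate words that appear in the context.
    A real dialogue model would use learned metrics (e.g. sensibleness), not word overlap."""
    context_words = set(context.lower().split())
    candidate_words = candidate.lower().split()
    if not candidate_words:
        return 0.0
    return sum(w in context_words for w in candidate_words) / len(candidate_words)

def choose_reply(context: str, candidates: list[str]) -> str:
    """Pick the candidate reply that best fits the conversation context."""
    return max(candidates, key=lambda c: score(context, c))

context = "I just started guitar lessons"
candidates = [
    "Nice weather today.",
    "How exciting! Guitar lessons are a great way to start playing music.",
]
print(choose_reply(context, candidates))
```

The point of the sketch is the two-stage shape (generate candidates, then rank them against the context), which is what lets a dialogue system stay on topic without being confined to a script.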

The risks of creating technologies such as LaMDA

One of the questions Google raises about the development of LaMDA is whether the technology respects the firm’s AI principles. In its announcement, the company presents this question as its priority:

“Language may be one of humanity’s greatest tools, but like all tools, it can be misused. Models trained on language can propagate this misuse, for example by internalizing prejudices, reflecting hate speech, or replicating false information. And even when the language it is trained on is carefully vetted, the model itself can still be misused. Our top priority when creating technologies like LaMDA is to ensure that we minimize these risks.”

A useful precaution, since many chatbot experiments have gone awry, starting with Tay, a chatbot launched by Microsoft in 2016 that ended up relaying Nazi slurs. (It’s worth remembering that despite advances in NLP, a chatbot is not “aware” of what it is saying and has no consciousness at all. Still, such a slip can have serious consequences in terms of communication and raises a real ethical responsibility towards people who might be offended.)

Google also says it makes its resources available as open source so that researchers and developers can analyze the models and examine the data on which they are built.

Translated from Google I/O : annonce du chatbot LaMDA, capable de dialoguer naturellement avec un humain