Meta reveals how it intends to build the metaverse thanks to Artificial Intelligence

On January 20, Meta presented “data2vec”, the first high-performance self-supervised algorithm that works for speech, vision and text. Four days later, Mark Zuckerberg announced the construction of the AI Research SuperCluster (RSC), one of the fastest AI supercomputers running today and expected to be the fastest in the world by mid-2022, once fully built. On February 23, the group hosted “Inside the Lab”, a virtual Meta AI event on the role of AI in building the metaverse, and presented some of its work.

Facebook began to take an interest in artificial intelligence as early as 2013, recruiting Yann LeCun, winner of the 2018 Turing Award, to set up its FAIR (Facebook Artificial Intelligence Research) lab; he gave us an interview in the first issue of ActuIA, the artificial intelligence magazine. Since then, the group has not stopped investing in the field of AI. Mark Zuckerberg’s enthusiasm for the metaverse, which he sees as the future of the Internet, is well known: he renamed the group Meta, created a dedicated division called Reality Labs, and invested more than $10 billion in this field in 2021. The latest advances were presented at “Inside the Lab”, where Mark Zuckerberg stated:

“As we develop the metaverse, we will need AI to do the heavy lifting that makes next-generation virtual experiences possible.”

Builder Bot

Mark Zuckerberg introduced Builder Bot, a new AI tool powering creativity in the metaverse that lets you generate or import things into a virtual world through voice commands. In a demo, his avatar walked through 3D landscapes after the commands “let’s go to a park” and “let’s go to the beach”. When he asked Builder Bot to add clouds, an island, or trees, the AI executed the commands and added the elements to the virtual landscape.

No Language Left Behind and Universal Speech Translator, to eliminate language barriers

AI machine translation systems do not cover thousands of the world’s languages, so more than 20% of the world’s population cannot use them in their native language. The scarcity of data for these languages is a major obstacle, since models are typically trained on millions of example sentences. The challenge of translating speech directly is even greater.

Meta is addressing these challenges by developing new machine learning techniques for two projects. The first, No Language Left Behind, focuses on creating AI models that can learn to translate languages from fewer training examples. The second, “Universal Speech Translator”, aims to build systems that translate speech directly, in real time, from one language to another, covering both languages that have a standard writing system and those that do not. The company states:

“If No Language Left Behind and Universal Speech Translator, combined with the efforts of the machine translation research community, succeed in creating translation technologies that include everyone, it will open up the digital and physical worlds in ways previously impossible. We are already making progress in enabling translations for low-resource languages, a significant barrier to universal translation for most of the world’s population. By advancing and opening up our work on corpus creation, multilingual modeling, and evaluation, we hope that other researchers can build on this work and bring real-world uses of translation systems closer to reality.”

Project CAIRaoke for smoother conversation with virtual assistants

Project CAIRaoke is a new approach to the AI that powers chatbots and assistants.

Virtual assistants traditionally rely on four components: natural language understanding (NLU), dialogue state tracking (DST), dialogue policy management (DP) and natural language generation (NLG). These separate AI systems must then be linked together, which makes it complicated to improve their architecture and optimize them, especially to adapt them to new, unseen tasks. End-to-end approaches have been explored elsewhere: in 2020, Google AI presented Meena, a conversational agent with 2.6 billion parameters. For its part, Meta has developed CAIRaoke, its own end-to-end neural model, and is already using it on Portal, one of its products, where it makes it easier to create and manage reminders. The company states:
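To make the contrast concrete, here is a minimal toy sketch of the traditional modular pipeline described above (NLU → DST → DP → NLG). All function names and the reminder logic are illustrative assumptions, not Meta’s code; an end-to-end model like CAIRaoke would replace all four hand-linked stages with a single neural network trained jointly.

```python
# Toy sketch of a modular assistant pipeline (NLU -> DST -> DP -> NLG).
# Purely illustrative: real systems use trained models at each stage.

def nlu(utterance: str) -> dict:
    """Natural language understanding: extract an intent and a slot."""
    intent = "set_reminder" if "remind" in utterance else "unknown"
    task = utterance.split("to", 1)[1].strip() if "to" in utterance else None
    return {"intent": intent, "task": task}

def dst(state: dict, frame: dict) -> dict:
    """Dialogue state tracking: merge the new frame into the running state."""
    state = dict(state)
    state.update({k: v for k, v in frame.items() if v is not None})
    return state

def dp(state: dict) -> str:
    """Dialogue policy: decide the next system action from the state."""
    if state.get("intent") == "set_reminder" and state.get("task"):
        return "confirm_reminder"
    return "ask_clarification"

def nlg(action: str, state: dict) -> str:
    """Natural language generation: render the chosen action as text."""
    if action == "confirm_reminder":
        return f"OK, I'll remind you to {state['task']}."
    return "Sorry, what would you like me to do?"

def assistant(utterance: str, state: dict) -> tuple[str, dict]:
    # Each stage's output feeds the next -- the hand-wired chaining
    # that end-to-end models aim to eliminate.
    state = dst(state, nlu(utterance))
    return nlg(dp(state), state), state

reply, state = assistant("remind me to water the plants", {})
```

Because each stage has its own input/output format, improving one (say, the NLU) can force changes in all the others, which is the brittleness the article refers to.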

“We aim to integrate it with augmented and virtual reality devices to enable immersive, multimodal interactions with assistants in the future.”

Meta also announced the creation of the Artificial Intelligence Learning Alliance (AILA), an initiative to enhance diversity and equity in the field of artificial intelligence, and TorchRec, a PyTorch domain library for recommender systems.

Translated from Meta dévoile comment elle entend construire le métavers grâce à l’Intelligence Artificielle
