ThreeDWorld Transport Challenge: a challenge to test how robots interact with an environment


Researchers from IBM, the Massachusetts Institute of Technology (MIT) and Stanford University have teamed up to launch the “ThreeDWorld Transport Challenge”. Its objective is to evaluate the ability of artificial intelligence systems to find paths, interact with objects and plan tasks efficiently. To date, no AI model has been able to meet the challenge.

A challenge launched by a team of researchers

In robotics, developing a system that can physically sense the world and interact with its environment is often presented as one of the main challenges of artificial intelligence. Today, even though the field’s achievements can be remarkable, they still fall far short of human capabilities.

A team of researchers from MIT, IBM and Stanford has launched a challenge called the ThreeDWorld Transport Challenge. Collaborating scientists include Chuang Gan, Abhishek Bhandwaldar, Jeremy Schwartz, Seth Alter, Todd Mummert, Josh McDermott, Daniel Yamins, James DiCarlo, Siyuan Zhou, Antonio Torralba, Joshua Tenenbaum and Dan Gutfreund.

The premise of the challenge is simple: an artificial intelligence system that passes all the tests would be considered highly advanced. It should be noted that no system has yet managed to complete the challenge. Why, then, set AI systems a challenge that seems unattainable? In reality, the researchers want to probe the limits of current models, and the results of the competition may help determine which research directions to focus on.

A virtual environment created especially for this challenge

Most robotics applications use reinforcement learning. The creation of this type of model presents several challenges:

  • One of them is designing a model that accounts for factors such as gravity, wind, and physical interactions with objects or other people. This contrasts with constrained environments such as chess, where machines now beat humans.
  • Data collection is another major challenge: reinforcement learning systems need huge amounts of training data, even if that means simulating millions of interactions with their environment. This can slow down real robotic systems, which have to collect their data from a constantly changing physical world; a minimal sketch of such an interaction loop follows this list.
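
To make the data-collection point concrete, here is a minimal sketch of the trial-and-error loop a reinforcement learning agent runs in simulation. The `agent` and `env` objects are illustrative assumptions (a generic policy and a gym-style environment), not part of the actual ThreeDWorld code base.

    def train(agent, env, total_steps=1_000_000):
        """Collect experience by simulated trial and error.

        The sheer number of iterations is why simulation matters: a physical
        robot could not realistically perform millions of interactions.
        """
        obs = env.reset()
        for _ in range(total_steps):
            action = agent.act(obs)                    # e.g. turn, move, pick up, drop
            next_obs, reward, done = env.step(action)  # simulate one interaction
            agent.learn(obs, action, reward, next_obs, done)
            obs = env.reset() if done else next_obs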

To overcome these obstacles, developers have built simulated environments in which reinforcement learning systems can train. Reproducing the exact dynamics of the real world is difficult, but the challenge team managed to simulate a test environment that is as realistic as possible and that provides the setting in which AI systems take on the challenge.

A complex test for reinforcement learning models

Reinforcement learning benchmarks come in different degrees of difficulty. The most common ones involve a robot finding its way around a virtual environment. The ThreeDWorld Transport Challenge goes further by posing “task and motion planning” (TAMP) problems, which require the robot not only to find optimal paths but also to manipulate the objects that lie along them.

The virtual environment consists of a house with several rooms containing furniture and objects. The robot perceives the scene from a first-person point of view and has to find several objects and gather them in a specific place. Since the robot has two arms, it can only carry two objects at a time. Containers are scattered around the rooms, and the machine can use them to carry several objects at once and thus reduce the number of trips it has to make.

The robot must complete these tasks in as few steps as possible, where each action (turn, move forward, pick up an object, drop an object) counts as one step. One of the virtual robots used in this challenge is called Magnebot. Each of its two arms has nine degrees of freedom, with shoulder, elbow and wrist joints. Its hands, however, are magnets, because dexterous manipulation with fingers remains extremely difficult to achieve.
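
The step accounting and carrying constraints described above can be illustrated with a short, hedged Python sketch. The `Action` enum, `TransportState` class and `MAX_IN_HAND` constant are names invented here for illustration and are not the challenge’s official API.

    from dataclasses import dataclass, field
    from enum import Enum, auto

    class Action(Enum):
        TURN = auto()
        MOVE_FORWARD = auto()
        PICK_UP = auto()
        DROP = auto()

    MAX_IN_HAND = 2  # one item per magnet hand; a held container can itself hold several objects

    @dataclass
    class TransportState:
        steps_taken: int = 0
        in_hand: list = field(default_factory=list)

        def pick_up(self, item: str) -> None:
            """Pick up an object or a container; every primitive action costs one step."""
            if len(self.in_hand) >= MAX_IN_HAND:
                raise ValueError("both magnet hands are already occupied")
            self.in_hand.append(item)
            self.steps_taken += 1

        def drop(self, item: str) -> None:
            """Put down a held item; this also counts as one step."""
            self.in_hand.remove(item)
            self.steps_taken += 1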

Encouraging results, but far from satisfactory

To avoid ambiguity, tasks are given to the robot in a simple coded form that combines the name of an object with a number corresponding to what must be done with it. To narrow the scope, the researchers limited the robot’s navigation to translations of 25 centimeters and rotations of 15 degrees, which allowed them to concentrate on the navigation and task-planning problems the robot must solve.
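
A minimal sketch of how such a coded task and the discretized motion primitives could be represented is shown below, assuming a task is simply an (object name, task number) pair; the helper names are hypothetical and the challenge’s exact encoding is not reproduced here.

    import math

    MOVE_STEP_M = 0.25   # navigation limited to 25-centimeter translations
    TURN_STEP_DEG = 15   # ... and 15-degree rotations

    def encode_task(object_name: str, task_number: int) -> str:
        """Pack a goal into a compact code, e.g. 'vase:0'."""
        return f"{object_name}:{task_number}"

    def plan_discrete_motion(distance_m: float, heading_deg: float) -> tuple[int, int]:
        """Approximate a continuous displacement with the discrete primitives:
        the number of 15-degree turns, then the number of 25 cm forward moves."""
        turns = round(heading_deg / TURN_STEP_DEG)
        moves = math.ceil(distance_m / MOVE_STEP_M)
        return turns, moves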

In the researchers’ preliminary experiments, even the best-performing reinforcement learning models succeeded in only about 10 percent of the TDW tests. They also tried hybrid models in which the basic system was combined with a high-level planner, and observed a considerable improvement in performance. The problem nonetheless remains largely unsolved, as the best agents tested reach a success rate of only about 50%.
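
One plausible way such a hybrid could be organized is sketched below: a high-level planner chooses symbolic sub-goals (which object or container to go after next) while a learned low-level policy emits primitive actions. This is only an assumption about the general architecture, not the researchers’ implementation; the `Planner`, `Policy` and `env` interfaces are invented for illustration.

    class Planner:
        """High-level, symbolic decision making."""
        def next_subgoal(self, observation):
            # e.g. return ('pick_up', 'container_3') or ('deliver', 'goal_room')
            raise NotImplementedError

    class Policy:
        """Low-level, learned control over primitive actions."""
        def act(self, observation, subgoal):
            # return one primitive action: turn, move forward, pick up or drop
            raise NotImplementedError

    def run_episode(env, planner, policy, max_steps=1000):
        obs = env.reset()
        subgoal = planner.next_subgoal(obs)
        for _ in range(max_steps):
            action = policy.act(obs, subgoal)
            obs, done, subgoal_reached = env.step(action)  # hypothetical env interface
            if done:
                break
            if subgoal_reached:            # re-plan only at the symbolic level
                subgoal = planner.next_subgoal(obs)
        return obs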

Translated from ThreeDWorld Transport Challenge : un défi pour tester l’interaction des robots dans un environnement