A robot equipped with large pre-trained models has learned to follow language instructions to reach a destination without looking at a map. The result comes from new work by reinforcement learning expert Sergey Levine.
Given only a destination, how hard is it to reach it smoothly without a navigation route?
The task is challenging even for humans with a poor sense of direction. Yet in a recent study, researchers "taught" a robot to do it using only three pre-trained models.
One of the core challenges of robot learning is enabling robots to perform a variety of tasks from high-level human instructions. This requires robots that can both understand human instructions and command a large repertoire of actions to carry out those instructions in the real world.
For instruction-following tasks in navigation, prior work has mainly focused on learning from trajectories annotated with textual instructions. This enables some understanding of textual instructions, but the cost of data annotation has kept the technique from being widely used. On the other hand, recent work has shown that self-supervised training of goal-conditioned policies can learn robust navigation. These methods train vision-based controllers on large, unlabeled datasets with post-hoc relabeling. They are scalable, general, and robust, but usually require cumbersome location- or image-based goal specification mechanisms.
In a recent paper, researchers from UC Berkeley, Google, and other institutions aim to combine the advantages of the two approaches: a self-supervised robot navigation system that trains on navigation data without any user annotations, while leveraging the ability of pre-trained models to execute natural language instructions. The researchers use these models to build an "interface" that communicates tasks to the robot. The system exploits the generalization capabilities of pre-trained language and vision-language models, enabling the robotic system to accept complex high-level instructions.
The researchers observed that off-the-shelf pre-trained models, trained on large corpora of visual and language data that are widely available and exhibit zero-shot generalization, can be leveraged to create an interface for embodied instruction following. To achieve this, they combined the strengths of robot-agnostic pre-trained vision and language models with a pre-trained navigation model. Specifically, a visual navigation model (VNM: ViNG) turns the robot's visual observations into a topological "mental map" of the environment. Given a free-form text instruction, a pre-trained large language model (LLM: GPT-3) decodes the instruction into a sequence of textual landmarks. A vision-language model (VLM: CLIP) then grounds these landmarks in the topological map by inferring the joint likelihood of landmarks and nodes. Finally, a novel search algorithm maximizes the probabilistic objective to find an instruction path for the robot, which the VNM executes.

The main contribution of the work is LM-Nav, an embodied instruction-following system that combines three large, independently pre-trained models: a self-supervised robot control model that uses visual observations and physical actions (VNM), a vision-language model that grounds images in text but has no embodiment (VLM), and a large language model that can parse and translate text but has no visual grounding or sense of embodiment (LLM). Together they enable long-horizon instruction following in complex real-world environments. For the first time, the researchers instantiate the idea of combining pre-trained vision and language models with a goal-conditioned controller to derive actionable plans in the target environment without any fine-tuning. Notably, all three models are trained on large-scale datasets with self-supervised objectives and are used off the shelf: training LM-Nav requires no human annotation of robot navigation data.
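The overall composition can be summarized as a short sketch. This is a high-level illustration with hypothetical interfaces, not the authors' released code: `vnm`, `llm`, and `vlm` stand for the ViNG navigation model, GPT-3, and CLIP, wrapped behind assumed method names.

```python
def lm_nav(instruction, observations, vnm, llm, vlm):
    # VNM: estimate traversability between past observations and connect
    # nearby ones into a topological graph of the environment.
    graph = vnm.build_graph(observations)

    # LLM: decode the free-form instruction into an ordered list of
    # textual landmarks (sub-goals).
    landmarks = llm.extract_landmarks(instruction)

    # VLM: score the joint likelihood of each landmark phrase against the
    # image stored at each graph node.
    probs = vlm.score(landmarks, graph.node_images)

    # Search: find the graph path that visits the landmarks in order while
    # keeping the traversal distance short (see the search sketch below).
    path = graph.search(landmarks, probs)

    # VNM: the goal-conditioned policy executes the path node by node.
    vnm.execute(path)
```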
Experiments show that LM-Nav can successfully follow natural language instructions in new environments, using fine-grained commands to disambiguate paths during complex suburban navigation of up to 100 meters.
So how do the researchers use pre-trained image and language models to provide a text interface for the visual navigation model? The system proceeds in five steps:
1. Given a set of observations in the target environment, the goal-conditioned distance function, part of the visual navigation model (VNM), is used to infer connectivity between them and build a topological map of the environment.
2. The large language model (LLM) parses the natural language instruction into a sequence of landmarks, which serve as intermediate sub-goals for navigation (a prompt sketch follows the list below).
3. The vision-language model (VLM) grounds the landmark phrases in the visual observations: it infers a joint probability distribution over the landmark descriptions and the images that form the nodes of the graph.
4. Using the VLM's probability distribution and the graph connectivity inferred by the VNM, a novel search algorithm retrieves an optimal instruction path in the environment, one that (i) satisfies the original instruction and (ii) is the shortest path in the graph that does so (see the planning sketch after this list).
5. The instruction path is then executed by the goal-conditioned policy, which is part of the VNM.
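For step 2, a minimal sketch of the landmark-parsing idea is shown below. It is an assumed illustration, not the prompt from the paper: `complete` stands in for whatever text-completion interface is available (the paper uses GPT-3) and is a hypothetical callable here.

```python
from typing import Callable, List

PROMPT_TEMPLATE = """Extract the landmarks mentioned in the navigation
instruction, one per line, in the order they should be visited.

Instruction: Go past the stop sign, then head toward the white building.
Landmarks:
1. a stop sign
2. a white building

Instruction: {instruction}
Landmarks:
"""

def parse_landmarks(instruction: str, complete: Callable[[str], str]) -> List[str]:
    """Return an ordered list of landmark phrases extracted by the LLM."""
    completion = complete(PROMPT_TEMPLATE.format(instruction=instruction))
    landmarks = []
    for line in completion.splitlines():
        line = line.strip()
        if not line:
            continue
        # Strip a leading "1." / "2." style index if the model emits one.
        head, _, tail = line.partition(".")
        if head.strip().isdigit():
            line = tail.strip()
        landmarks.append(line)
    return landmarks
```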
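For steps 3 and 4, the following is a runnable toy sketch of the planning idea, not the authors' exact objective or code: given a small topological graph with traversal costs from the VNM and a matrix of VLM probabilities P(landmark | node image), it finds node assignments that visit the landmarks in order while keeping the traversal short. The graph, probabilities, and the weighting factor `ALPHA` are invented toy values for illustration.

```python
import math
import itertools

ALPHA = 0.5  # trade-off between traversal cost and landmark likelihood

def all_pairs_shortest_paths(n, edges):
    """Floyd-Warshall over a small graph; edges is {(u, v): cost}."""
    dist = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0.0
    for (u, v), c in edges.items():
        dist[u][v] = min(dist[u][v], c)
        dist[v][u] = min(dist[v][u], c)  # assume edges are traversable both ways
    for k, i, j in itertools.product(range(n), repeat=3):
        dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
    return dist

def plan(n_nodes, edges, landmark_probs, start):
    """landmark_probs[i][v] = P(landmark i | image at node v)."""
    dist = all_pairs_shortest_paths(n_nodes, edges)
    n_landmarks = len(landmark_probs)
    # best[i][v]: minimal cost of grounding landmarks 0..i with landmark i
    # at node v; cost = ALPHA * distance - sum of log-likelihoods.
    best = [[math.inf] * n_nodes for _ in range(n_landmarks)]
    parent = [[None] * n_nodes for _ in range(n_landmarks)]
    for v in range(n_nodes):
        best[0][v] = ALPHA * dist[start][v] - math.log(landmark_probs[0][v])
    for i in range(1, n_landmarks):
        for v in range(n_nodes):
            for u in range(n_nodes):
                c = best[i - 1][u] + ALPHA * dist[u][v] \
                    - math.log(landmark_probs[i][v])
                if c < best[i][v]:
                    best[i][v], parent[i][v] = c, u
    # Recover the node grounded to each landmark, in visiting order.
    v = min(range(n_nodes), key=lambda x: best[-1][x])
    waypoints = [v]
    for i in range(n_landmarks - 1, 0, -1):
        v = parent[i][v]
        waypoints.append(v)
    return list(reversed(waypoints))

if __name__ == "__main__":
    # 4-node toy graph: 0 -- 1 -- 2 -- 3, plus a long shortcut 0 -- 3.
    edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (0, 3): 5.0}
    # Two landmarks ("a stop sign", "a picnic bench"): P(landmark | node).
    probs = [[0.05, 0.80, 0.10, 0.05],
             [0.05, 0.05, 0.10, 0.80]]
    print(plan(4, edges, probs, start=0))  # -> [1, 3]
```

The dynamic program here grounds landmarks sequentially at graph nodes; the goal-conditioned policy of the VNM would then drive the robot between consecutive waypoints.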
When independently evaluating the VLM's effectiveness at retrieving landmarks, the researchers found that although CLIP is the best off-the-shelf model for this kind of task, it fails to retrieve a small number of "hard" landmarks, including fire hydrants and cement mixers. In many real-world situations, however, the robot can still find a path that visits the remaining landmarks.
Table 1 summarizes the quantitative performance of the system across 20 instructions. In 85% of the experiments, LM-Nav consistently followed the instructions without collisions or disengagements (averaging one intervention per 6.4 kilometers of travel). Compared to a baseline without the navigation model, LM-Nav consistently executed more efficient, collision-free paths toward the goal. In all unsuccessful experiments, the failure could be attributed to shortcomings in the planning phase: the search algorithm could not ground certain "hard" landmarks in the graph, leaving the instruction only partially executed. An investigation of these failure modes showed that the most critical part of the system is the VLM's ability to detect unfamiliar landmarks, such as fire hydrants, and to handle scenes under challenging lighting conditions, such as underexposed images.