Embodied intelligence has seen rapid progress recently. From Google's RT-H to Figure 01, built by Figure in collaboration with OpenAI, robots are becoming more interactive and versatile.
If robots become assistants in people's daily lives in the future, what tasks would you expect them to handle? Brew a steaming cup of hand-brewed coffee, tidy up a desk, or even help you arrange a romantic date. Tsinghua's new embodied intelligence framework "CoPa" can complete such tasks from a single command.
CoPa (Robotic Manipulation through Spatial Constraints of Parts) is the latest framework proposed by the Tsinghua University robotics research team led by Professor Gao Yang. For the first time, the framework enables robots to generalize across a variety of scenarios when facing long-horizon tasks and complex 3D manipulation behaviors.
Paper address: https://arxiv.org/abs/2403.08248
Project home page: https://copa-2024.github.io/
Thanks to its novel use of vision-language models (VLMs), CoPa works in open-world settings without any task-specific training: it generalizes across scenarios and can handle complex instructions. Most striking is its deep understanding of the physical properties of objects in the scene, along with its precise planning and manipulation capabilities.
For example, CoPa can help researchers make a cup of hand-brewed coffee:
In this task, CoPa not only understands each object on a cluttered tabletop but also completes the required physical operations with precise control. For example, in the subtask "pour water from the kettle into the funnel", the robot moves the kettle over the funnel and rotates it to just the right angle so that water flows from the spout into the funnel.
CoPa can also carefully arrange a romantic date. After understanding the researchers' requirements for the date, CoPa set up a beautiful Western-style dinner table for them.
While deeply understanding user needs, CoPa also demonstrates precise object manipulation. For example, in the task of "inserting a flower into a vase", the robot first grasps the flower stem, rotates it until it points toward the vase, and finally inserts it.
Method introduction
Algorithm process
Most manipulation tasks can be decomposed into two stages: grasping the object, and the subsequent motions required to complete the task. For example, to open a drawer we first grasp its handle and then pull the drawer out along a straight line. Based on this, the researchers designed a two-stage pipeline: a Task-Oriented Grasping module first generates the pose with which the robot grasps the object, and a Task-Aware Motion Planning module then generates the sequence of poses required to complete the task after grasping. The robot's transitions between adjacent poses are handled by conventional path planning algorithms.
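To make the two-stage structure concrete, here is a minimal Python sketch of the pipeline. All names in it (Pose, task_oriented_grasping, task_aware_motion_planning, the robot object) are hypothetical placeholders for illustration, not the authors' released code.

```python
# Minimal sketch of CoPa's two-stage decomposition. Helper names are
# hypothetical placeholders, not the actual implementation.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Pose:
    position: Tuple[float, float, float]              # (x, y, z) in metres
    orientation: Tuple[float, float, float, float]    # quaternion (x, y, z, w)

def task_oriented_grasping(instruction: str, observation) -> Pose:
    """Stage 1: locate the task-relevant part and return a grasp pose on it."""
    raise NotImplementedError  # see the grasping sketch later in this section

def task_aware_motion_planning(instruction: str, observation) -> List[Pose]:
    """Stage 2: post-grasp poses needed to finish the task."""
    raise NotImplementedError  # see the constraint-solving sketch later in this section

def run_copa(instruction: str, observation, robot) -> None:
    grasp = task_oriented_grasping(instruction, observation)
    robot.move_to(grasp)        # transitions between poses use a standard motion planner
    robot.close_gripper()
    for pose in task_aware_motion_planning(instruction, observation):
        robot.move_to(pose)
```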
Important Part Detection Module
The researchers observed that most manipulation tasks require fine-grained, part-level understanding of objects in the scene. For example, when cutting something with a knife we hold the handle rather than the blade; when putting on glasses we hold the frame rather than the lenses. Based on this observation, the team designed a coarse-to-fine part grounding module to locate task-relevant parts in the scene. Specifically, CoPa first locates task-relevant objects through coarse-grained object detection, and then locates the task-relevant parts of those objects through fine-grained part detection.
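A rough sketch of this coarse-to-fine grounding step is shown below. The detect_objects, segment_parts, and vlm_select callables are illustrative assumptions: the paper uses a vision-language model for the selection steps, but the exact interfaces here are invented for clarity.

```python
# Illustrative coarse-to-fine part grounding. The detector, segmenter and
# VLM-selection functions are passed in as callables because their concrete
# interfaces are assumptions, not the paper's actual API.
def ground_task_relevant_part(instruction, image, detect_objects, segment_parts, vlm_select):
    # Coarse stage: detect candidate objects, then let the VLM pick the
    # task-relevant one (e.g. the knife for "cut the apple").
    object_boxes = detect_objects(image)                        # candidate bounding boxes
    target_box = vlm_select(instruction, image, object_boxes)   # VLM chooses one object

    # Fine stage: segment that object into parts, then let the VLM pick the
    # task-relevant part (e.g. the handle rather than the blade).
    x0, y0, x1, y1 = target_box
    object_crop = image[y0:y1, x0:x1]                           # assumes a NumPy-style image
    part_masks = segment_parts(object_crop)                     # candidate part masks
    target_mask = vlm_select(instruction, object_crop, part_masks)
    return target_box, target_mask
```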
In the "task-oriented grabbing module", CoPa first locates the grabbing position (such as the handle of the tool) through the important part detection module, and the position information is Used to filter the grasping poses generated by GraspNet (a model that can generate all possible grasping poses in the scene) to obtain the final grasping pose.
Task-Aware Motion Planning Module
To let a vision-language model help the robot perform manipulation tasks, the researchers needed an interface that the model can reason about in language and that can also drive robot motion. The team found that during task execution, task-relevant objects are usually subject to many spatial geometric constraints. For example, when charging a phone, the connector must face the charging port; when capping a bottle, the cap must sit squarely on the bottle's mouth. Based on this, the team proposed using spatial constraints as the bridge between the vision-language model and the robot. Specifically, CoPa first uses the VLM to generate the spatial constraints that task-relevant objects must satisfy to complete the task, and then uses a solver module to compute the robot's poses from these constraints.
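As a rough illustration of how such constraints could be turned into a robot pose, the sketch below optimizes a 6-DoF pose so that chosen points and directions on the grasped object line up with their targets. This particular formulation, and the use of SciPy's Nelder-Mead optimizer, is an assumption for illustration, not the paper's exact solver.

```python
# Illustrative constraint-to-pose solver: minimize positional and directional
# violations of the given spatial constraints. This setup is an assumption,
# not CoPa's actual solving module.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def solve_target_pose(constraints, init_pose=None):
    """constraints: list of dicts with NumPy-array values
         'p_obj', 'd_obj' -- a point / unit direction on the grasped object (object frame)
         'p_tgt', 'd_tgt' -- where that point / direction should end up (world frame)
       Returns a 6-DoF pose [x, y, z, rx, ry, rz] (translation + rotation vector)."""
    if init_pose is None:
        init_pose = np.zeros(6)

    def cost(pose):
        t, rvec = pose[:3], pose[3:]
        R = Rotation.from_rotvec(rvec).as_matrix()
        err = 0.0
        for c in constraints:
            p = R @ c["p_obj"] + t                # where the constrained point ends up
            d = R @ c["d_obj"]                    # where the constrained direction points
            err += np.sum((p - c["p_tgt"]) ** 2)  # positional constraint
            err += 1.0 - float(d @ c["d_tgt"])    # directional alignment (cosine similarity)
        return err

    return minimize(cost, init_pose, method="Nelder-Mead").x
```

For instance, the "cap must sit squarely on the bottle's mouth" example from above could be encoded as one positional term (cap center reaching the mouth center) plus one directional term (cap axis aligned with the mouth's normal).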
Experimental results
CoPa Capability Assessment
CoPa demonstrated strong generalization on real-world manipulation tasks. Its deep understanding of the physical properties of objects in the scene stems from the common-sense knowledge embedded in vision-language models.
For example, in the "hammer a nail" task, CoPa first grasped the hammer handle, rotated the hammer until its face pointed at the nail, and finally hammered downward. This required precisely identifying the hammer handle, the hammer face, and the nail, and fully understanding their spatial relationships, demonstrating CoPa's deep understanding of the physical properties of objects in the scene.
In the task of "putting the eraser into the drawer", CoPa first located the eraser, then found that part of the eraser was wrapped in paper, so it cleverly grabbed it This part, make sure the eraser doesn't get stained.
In the task of "inserting the spoon into the cup", CoPa first grabbed the handle of the spoon, translated and rotated it until it was facing vertically downward and facing the cup, and finally Inserting it into a cup demonstrates that CoPa has a good understanding of the spatial geometric constraints that an object needs to meet to complete its task.
The research team conducted quantitative experiments on 10 real-world tasks. As shown in Table 1, CoPa significantly outperforms baseline methods as well as several ablation variants on these complex tasks.
Ablation Experiments
Through a series of ablation experiments, the researchers demonstrated the importance of three components of the CoPa framework: the foundation model, coarse-to-fine part detection, and spatial constraint generation. The results are shown in Table 1 above.
Foundation model
The CoPa w/o foundation ablation in the table removes CoPa's use of the foundation model, instead using a detection model to locate objects and a rule-based approach to generate spatial constraints. The results show that this ablation variant has a very low success rate, proving the important role of the rich common-sense knowledge in the foundation model. For example, in the "sweep nuts" task, the ablation variant does not know which tool in the scene is suitable for sweeping.
Coarse-to-fine part detection
The CoPa w/o coarse-to-fine ablation in the table removes CoPa's coarse-to-fine part detection design, instead applying fine-grained segmentation directly to localize parts. This variant performs noticeably worse on tasks where the important part is hard to locate. For example, in the "hammer a nail" task, the lack of the coarse-to-fine design makes it difficult to identify the hammer face.
Spatial constraint generation
The CoPa w/o constraint ablation in the table removes CoPa's spatial constraint generation module, instead having the vision-language model directly output numerical values for the robot's target pose. Experiments show that it is very hard to output a robot target pose directly from scene images. For example, in the "pour water" task the kettle must be tilted to a specific angle, and this variant is completely unable to generate the required pose.
For more information, please refer to the original paper.