
The first general 3D embodied vision-language model system for furniture and home appliances: no demonstrations required, and it generalizes to unseen objects


These days, robots seem to be taking over the housework.

Stanford's pot-wielding robot had barely appeared when Figure-01, a robot that can operate a coffee machine, arrived.


Figure-01 only needed to watch demonstration videos and train for 10 hours to operate a coffee machine proficiently. From inserting the coffee capsule to pressing the start button, everything is done in one go.

However, enabling robots to learn on their own to use whatever furniture and home appliances they encounter, without demonstration videos, remains a hard problem. It requires strong visual perception, decision-making and planning capabilities, as well as precise manipulation skills.

Now, a 3D embodied vision-language large model system offers a new approach to this problem. The system combines a precise geometric perception model based on 3D vision with a 2D vision-language large model that excels at planning, and it can solve complex long-horizon tasks involving furniture and home appliances without any training samples.

This research was carried out by the team of Professor Leonidas Guibas at Stanford University, the team of Professor He Wang at Peking University, and the Beijing Academy of Artificial Intelligence (BAAI).


Paper link: https://arxiv.org/abs/2312.01307

Project homepage: https://geometry.stanford.edu/projects/sage/

Code: https://github.com/geng-haoran/SAGE

Overview of Research Problem


Figure 1: Following human language instructions, the robotic arm can operate various household appliances without any demonstration.

Recently, PaLM-E and GPT-4V have driven the use of large vision-language models in robot task planning, and vision-language-guided generalizable robot manipulation has become a popular research area.

A common approach in the past was to build a two-layer system: an upper-layer large vision-language model handles planning and skill scheduling, while a lower-layer control policy model physically executes the actions. But when a robot faces the wide variety of unseen household appliances and the multi-step operations involved in housework, both layers of existing methods fall short.

Take the most advanced vision-language model, GPT-4V, as an example. Although it can describe a single image in text, it still makes many mistakes when it comes to detecting, counting, localizing, and estimating the state of operable parts. The red highlights in Figure 2 mark the errors GPT-4V made when describing images of a chest of drawers, an oven, and a standing cabinet. Skill scheduling built on such flawed descriptions is obviously unreliable.


Figure 2: GPT-4V does not handle counting, detection, localization, state estimation, and other tasks essential to generalizable manipulation very well.

The lower-layer control policy model is responsible for executing, in various physical situations, the tasks given by the upper-layer vision-language model. Most existing work hard-codes grasp points and operation procedures for a few known objects based on rules, and cannot generalize to unseen object categories. End-to-end manipulation models (such as RT-1 and RT-2), meanwhile, use only the RGB modality, lack accurate distance perception, and generalize poorly to environmental changes such as different mounting heights.

Inspired by Professor He Wang's team's earlier CVPR Highlight work GAPartNet [1], the research team focused on parts (GAParts) that are common across categories of household appliances. However varied household appliances may be, a few parts are always indispensable, and these common parts share similar geometry and interaction patterns across appliances.

The GAPartNet paper [1] introduced the concept of a GAPart: a generalizable and actionable part. GAParts appear across different categories of articulated objects; for example, hinged doors can be found on safes, wardrobes, and refrigerators. As shown in Figure 3, GAPartNet [1] annotates the semantics and poses of GAParts on various kinds of objects.


Figure 3: GAPart, a generalizable and actionable part [1].

Building on this prior work, the research team introduced 3D-vision-based GAParts into a robot object manipulation system, SAGE. SAGE supplies the VLM and LLM with information from generalizable 3D part detection and accurate pose estimation. At the decision level, this compensates for the 2D vision-language model's weakness in precise computation and reasoning; at the execution level, it enables generalizable manipulation of each part through a robust physical manipulation API built on GAPart poses.

SAGE constitutes the first 3D embodied vision-language large model system. It offers a new perspective on the full robot pipeline from perception and physical interaction to feedback, and explores a possible path toward robots that can intelligently and generally operate complex objects such as furniture and home appliances.

System Introduction

Figure 4 shows the basic workflow of SAGE. First, a context-aware instruction interpretation module parses the instruction given to the robot together with its observations, turning them into an action program and the related semantic parts. Next, SAGE maps each semantic part (such as the "container") to the part that actually needs to be operated (such as the slider button) and generates the corresponding action (such as "press" for the button) to complete the task.


Figure 4: Overview of the method.
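
To make the flow in Figure 4 concrete, here is a minimal Python sketch of such a perceive, interpret, and act loop. All names (ActionStep, perceiver, llm_planner, executor) are illustrative placeholders assumed for this sketch, not the released SAGE interfaces.

```python
# Minimal sketch of the perceive -> interpret -> ground -> act loop described above.
# All names are illustrative placeholders, not the actual SAGE implementation.
from dataclasses import dataclass

@dataclass
class ActionStep:
    semantic_part: str      # e.g. "door", "start button"
    actionable_part: str    # e.g. "hinge door", "slider button"
    primitive: str          # e.g. "open", "press"

def run_episode(instruction: str, rgbd_observation, perceiver, llm_planner, executor):
    """One zero-shot manipulation episode, loosely following Figure 4."""
    # 1. Perceive: detect generalizable parts (GAParts) and build a scene description.
    scene = perceiver.describe(rgbd_observation)           # object / part / interaction info
    # 2. Interpret: the LLM turns the instruction + scene description into an action program.
    program: list[ActionStep] = llm_planner.plan(instruction, scene)
    # 3. Execute each step, re-observing and replanning when feedback flags a failure.
    for step in program:
        part_pose = perceiver.locate(step.actionable_part, rgbd_observation)
        status = executor.run(step.primitive, part_pose)
        if status == "stop_and_replan":
            program = llm_planner.plan(instruction, perceiver.describe(rgbd_observation))
    return "done"
```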

To make the whole pipeline easier to follow, let's walk through an example of a robotic arm operating an unseen microwave oven zero-shot.

Instruction interpretation: from visual and language input to executable skill commands

Given the instruction and an RGB-D observation, the interpreter first generates a scene description using the VLM and GAPartNet [1]. The LLM (GPT-4) then takes the instruction and scene description as input and produces the semantic parts and an action program. Optionally, a specific user manual can be supplied at this stage, and the LLM will generate the target operable part based on it.
Figure 5: Generation of the scene description (zero-shot use of a microwave oven as an example).

To better support action generation, the scene description contains object information, part information, and some interaction-related information. Before generating the scene description, SAGE also employs the expert GAPart model [1] to produce expert descriptions that serve as prompts for the VLM. This combination of a generalist model and an expert model works well.
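
To make this division of labor concrete, the snippet below sketches how expert part detections could be folded into a textual scene description before it is handed to the LLM. The field names and prompt wording are assumptions for illustration, not the paper's actual prompt.

```python
# Hypothetical composition of a scene description from expert part detections.
# The detection fields and prompt text are illustrative, not SAGE's actual prompt.
def build_scene_description(object_category: str, part_detections: list[dict]) -> str:
    lines = [f"The scene contains a {object_category}."]
    for det in part_detections:
        # Each detection is assumed to carry a part class, a joint type and a state.
        lines.append(
            f"- {det['part_class']} (joint: {det['joint_type']}, "
            f"state: {det['state']}, confidence: {det['score']:.2f})"
        )
    lines.append("Describe which parts must be manipulated to follow the user instruction.")
    return "\n".join(lines)

# Example usage with made-up detections from a microwave observation.
description = build_scene_description(
    "microwave oven",
    [
        {"part_class": "hinge door", "joint_type": "revolute", "state": "closed", "score": 0.94},
        {"part_class": "button", "joint_type": "prismatic", "state": "released", "score": 0.88},
    ],
)
print(description)
```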
Figure 6: Instruction understanding and motion planning (zero-shot use of a microwave oven as an example).

Understanding and perception of part interaction information
Figure 7: Part understanding.

When processing an observation, SAGE combines 2D cues from GroundedSAM with 3D cues from GAPartNet [1] and uses them to localize the operable parts. The perception results of the new method are produced with ScoreNet, non-maximum suppression (NMS), and PoseNet.

Two details: (1) For the part-perception evaluation benchmark, the paper uses SAM [2] directly; in the actual manipulation pipeline, however, it uses GroundedSAM, which also takes the semantic part as input. (2) If the large language model (LLM) directly outputs a target operable part, the grounding step is skipped.
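
As a rough illustration of how such 2D and 3D cues might be associated, the sketch below projects the points of each 3D part proposal into the image and keeps the proposal whose projection best overlaps the language-grounded 2D mask. This is a simplified stand-in under an assumed pinhole camera model, not SAGE's actual fusion code, which additionally relies on ScoreNet scoring, NMS, and PoseNet.

```python
import numpy as np

def project_points(points_xyz: np.ndarray, intrinsics: np.ndarray) -> np.ndarray:
    """Project Nx3 camera-frame points to pixel coordinates with a pinhole model."""
    uvw = (intrinsics @ points_xyz.T).T           # (N, 3)
    return uvw[:, :2] / uvw[:, 2:3]

def match_mask_to_part(mask: np.ndarray, part_points: list[np.ndarray],
                       intrinsics: np.ndarray) -> int:
    """Return the index of the 3D part proposal whose projection best overlaps the 2D mask.

    A simplified stand-in for fusing a GroundedSAM mask with GAPartNet-style
    part proposals; the mask is a boolean (H, W) array.
    """
    h, w = mask.shape
    best_idx, best_overlap = -1, -1.0
    for i, pts in enumerate(part_points):
        uv = np.round(project_points(pts, intrinsics)).astype(int)
        valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        uv = uv[valid]
        if len(uv) == 0:
            continue
        overlap = mask[uv[:, 1], uv[:, 0]].mean()  # fraction of projected points inside mask
        if overlap > best_overlap:
            best_idx, best_overlap = i, overlap
    return best_idx
```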
Figure 8: Part understanding (zero-shot use of a microwave oven as an example).

Action generation

Once the semantic part has been grounded to an operable part, SAGE generates executable manipulation actions on that part. First, SAGE estimates the part's pose, computing its articulation state (joint axis and position) and the possible directions of motion according to the joint type (prismatic or revolute). It then generates motions for the robot to operate the part based on these estimates.

In the task of starting the microwave, SAGE first predicts an initial gripper pose for the robotic arm as the primary action. Subsequent motions are then generated from the predefined strategies in GAPartNet [1], which are selected according to the part's pose and articulation state. For example, to open a door on a revolute hinge, the starting position can be on the edge of the door or on the handle, and the trajectory is an arc around the hinge axis.
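
For a revolute part such as the microwave door, the generated trajectory is essentially a set of gripper waypoints swept around the estimated hinge axis. The sketch below computes such an arc with Rodrigues' rotation formula; it is a simplified illustration of the idea, and the function names and numbers are assumptions rather than the actual manipulation API.

```python
import numpy as np

def rotate_about_axis(point, axis_origin, axis_dir, angle):
    """Rotate a 3D point about an arbitrary axis (Rodrigues' rotation formula)."""
    k = axis_dir / np.linalg.norm(axis_dir)
    p = point - axis_origin
    p_rot = (p * np.cos(angle)
             + np.cross(k, p) * np.sin(angle)
             + k * np.dot(k, p) * (1.0 - np.cos(angle)))
    return p_rot + axis_origin

def door_opening_waypoints(grasp_point, hinge_origin, hinge_dir,
                           target_angle_deg=60.0, n_steps=12):
    """Waypoints for pulling a hinged door open along an arc around its hinge axis."""
    angles = np.linspace(0.0, np.radians(target_angle_deg), n_steps + 1)[1:]
    return [rotate_about_axis(grasp_point, hinge_origin, hinge_dir, a) for a in angles]

# Example: a grasp on the door edge, hinge assumed along the vertical (z) axis.
waypoints = door_opening_waypoints(
    grasp_point=np.array([0.40, 0.15, 0.30]),
    hinge_origin=np.array([0.40, -0.15, 0.30]),
    hinge_dir=np.array([0.0, 0.0, 1.0]),
)
print(len(waypoints), "waypoints, last:", np.round(waypoints[-1], 3))
```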

Interactive feedback

So far, the system has relied on a single initial observation, producing open-loop interactions. To go further, the team introduced a mechanism that exploits observations gathered during the interaction, updating the perception results and adjusting the manipulation accordingly. Concretely, they added a two-part feedback mechanism to the interaction process.

It should be noted that occlusion and estimation errors may occur when perceiving the first observation.
Figure 9: The door cannot be opened directly, and this round of interaction fails (zero-shot use of a microwave oven as an example).

To address these problems, the researchers further proposed using interactive perception to strengthen the manipulation. The target gripper and part states are tracked throughout the interaction. If a significant deviation occurs, the planner chooses one of four options: "continue", "move to the next step", "stop and replan", or "success".

For example, if the gripper is commanded to rotate 60 degrees about a joint but the door opens only 15 degrees, the large language model (LLM) planner will select "stop and replan". This interactive tracking mechanism lets the LLM diagnose specific problems during the interaction, so the robot can recover after a failed attempt to start the microwave.
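
The feedback rule described above can be pictured as a small decision function that compares the commanded part motion with what is actually observed and maps the deviation to one of the four planner choices. The thresholds and state names below are assumptions made for illustration.

```python
# Illustrative feedback rule comparing commanded vs. observed part motion.
# Threshold values and state names are assumptions, not the paper's exact settings.
def feedback_decision(commanded_angle_deg: float, observed_angle_deg: float,
                      goal_angle_deg: float, tolerance_deg: float = 10.0) -> str:
    """Map the tracked deviation to one of the planner's four choices."""
    if abs(observed_angle_deg - goal_angle_deg) <= tolerance_deg:
        return "success"
    deviation = commanded_angle_deg - observed_angle_deg
    if deviation > tolerance_deg:
        # The part moved far less than commanded (e.g. commanded 60°, observed 15°):
        # the current plan is likely infeasible, so ask the LLM planner to replan.
        return "stop_and_replan"
    if observed_angle_deg >= commanded_angle_deg - tolerance_deg:
        return "move_to_next_step"
    return "continue"

print(feedback_decision(commanded_angle_deg=60, observed_angle_deg=15, goal_angle_deg=90))
# -> "stop_and_replan", matching the stuck-door example above
```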


Figure 10: Through interactive feedback and replanning, the robot figures out that the door is opened by pressing a button and succeeds.

Experimental results

The research team first built a test benchmark for language-guided articulated-object interaction.
Figure 11: SAPIEN simulation experiment.

They conducted simulation experiments in the SAPIEN environment [4], designing 12 language-guided articulated-object manipulation tasks. For each of the categories microwave oven, storage furniture, and cabinet, three tasks were designed, covering different initial states such as open and closed. The remaining tasks are "open the lid of the pot", "press the button on the remote control", and "start the blender". Experimental results show that SAGE performs well on almost all tasks.
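
As an illustration of how such language-guided tasks might be specified for evaluation, the snippet below lists a few entries in the spirit of the benchmark described above; the field names, thresholds, and success checks are hypothetical, not the benchmark's actual format.

```python
# Hypothetical task specifications in the spirit of the 12-task benchmark described above.
# Field names, thresholds and success checks are illustrative only.
BENCHMARK_TASKS = [
    {"object": "microwave", "instruction": "Open the microwave door",
     "initial_state": "closed", "success": lambda s: s["door_angle_deg"] > 60},
    {"object": "microwave", "instruction": "Start the microwave",
     "initial_state": "closed", "success": lambda s: s["running"]},
    {"object": "pot", "instruction": "Open the lid of the pot",
     "initial_state": "lid_on", "success": lambda s: s["lid_lifted"]},
    {"object": "blender", "instruction": "Start the blender",
     "initial_state": "off", "success": lambda s: s["running"]},
]

def evaluate(policy, env_factory):
    """Run a policy on every task and report the overall success rate."""
    results = []
    for task in BENCHMARK_TASKS:
        env = env_factory(task["object"], task["initial_state"])
        final_state = policy(env, task["instruction"])
        results.append(bool(task["success"](final_state)))
    return sum(results) / len(results)
```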


Figure 12: Real-robot demonstration.

The research team also conducted extensive real-world experiments using a UFACTORY xArm 6 and a variety of articulated objects. The upper-left part of the image above shows the blender-starting example: the top of the blender is perceived as a container for juice, but actually starting it requires pressing a button. SAGE's framework effectively bridges this gap between semantic and actionable understanding and completes the task.

The upper-right part shows a robot that must press an emergency-stop button (down) to stop operation and rotate it (up) to restart. Guided by SAGE and with a user manual as auxiliary input, the robotic arm accomplished both tasks. The bottom of the image shows more details of the microwave-opening task.


Figure 13: More examples of real-robot demonstrations and instruction interpretation.

Summary

SAGE is the first 3D vision-language model framework that can generate generalizable manipulation instructions for complex articulated objects such as furniture and home appliances. By connecting object semantics with actionability understanding at the part level, it converts language-instructed actions into executable manipulations.

The paper also studies how to combine general-purpose large vision/language models with domain-expert models to improve the completeness and correctness of the network's predictions, handling these tasks better and achieving state-of-the-art performance. Experimental results show that the framework generalizes strongly and performs well across different object categories and tasks. In addition, the paper provides a new benchmark for language-guided manipulation of articulated objects.

Team Introduction

The SAGE research comes from the laboratory of Professor Leonidas Guibas at Stanford University, the Embodied Perception and Interaction Lab (EPIC Lab) of Professor He Wang at Peking University, and the Beijing Academy of Artificial Intelligence. The authors are Haoran Geng, a Peking University student and visiting scholar at Stanford University (co-first author); Songlin Wei, a Peking University doctoral student (co-first author); and Stanford University doctoral students Congyue Deng and Bokui Shen, advised by Professor Leonidas Guibas and Professor He Wang.

References:

[1] Haoran Geng, Helin Xu, Chengyang Zhao, Chao Xu, Li Yi, Siyuan Huang, and He Wang. GAPartNet: Cross-category domain-generalizable object perception and manipulation via generalizable and actionable parts. arXiv preprint arXiv:2211.05272, 2022.

[2] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, et al. Segment Anything. arXiv preprint arXiv:2304.02643, 2023.

[3] Hao Zhang, Feng Li, Shilong Liu, Lei Zhang, Hang Su, Jun Zhu, Lionel M. Ni, and Heung-Yeung Shum. DINO: DETR with improved denoising anchor boxes for end-to-end object detection. arXiv preprint arXiv:2203.03605, 2022.

[4] Fanbo Xiang, Yuzhe Qin, Kaichun Mo, Yikuan Xia, Hao Zhu, Fangchen Liu, Minghua Liu, et al. SAPIEN: A simulated part-based interactive environment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11097-11107, 2020.
