
If you have something to say, just say it! Google's robot can learn and think on its own after 'eating' a large language model

王林
Release: 2023-05-04 14:13:06

"You can go to the hall, you can go to the kitchen." This is a compliment to the ideal kind wife, and I will probably say it to Google's robots in the future.

Have you ever seen a robot with a built-in large language model that can teach itself? It doesn't know how to do something? It can learn. Whatever it can't do right now, it will be able to do after a while.


Compared with Boston Dynamics' "iron-masked King Kong" machines, which climb mountains of knives, plunge into seas of fire, and cross hills and ridges as if walking on flat ground, the "learning robot" Google has developed this time is more like a considerate little assistant at your side. "You do exactly what I say" is the usual routine for robots executing instructions; Google's new research lets robots not only follow instructions but also work things out for themselves.

This is the first time Google has combined a large language model with a robot, teaching the robot to do things the way humans do.


Paper address: https://arxiv.org/pdf/2204.01691.pdf. As the title of Google's paper puts it: "Do As I Can, Not As I Say".

It roughly means: "You are already a mature robot. You can do what I do. If you don't know how, you can learn; if you are not yet skilled, you can practice!" Google named this robot PaLM-SayCan. In a Washington Post report, the journalist watched researchers ask the robot to make a burger out of plastic toy ingredients. The robotic arm seemed to know that it should add some ketchup after the patty and before the lettuce, but this chef currently believes that "adding ketchup" means putting the whole ketchup bottle into the burger.

Although this robot chef is not yet up to the job, Google believes that with training from a large language model, it is only a matter of time before it learns to make burgers. The robot can also recognize cans of 7-Up and Coca-Cola, open drawers, and find a bag of potato chips. With PaLM's capacity for abstraction, it can even understand that yellow, green, and blue bowls can stand for deserts, jungles, and oceans, respectively.


How does this differ from earlier robots? There have been robots that made burgers, fried noodles, and pizza before, but what they actually executed was a combination of explicit single-action instructions such as "move your right arm three spaces to the left" or "flip it over." Google's goal now is to let robots understand and carry out commands like "Come make me a hamburger," "I'm hungry, go buy me a bun," and "Go out and play ball with me."

It’s like talking to someone.

For example, when a Google AI researcher said to the PaLM-SayCan robot, "My drink spilled, can you help?", it glided on its wheels through the kitchen of a Google office building, used computer vision from its digital camera to spot a sponge on the counter, grabbed it with its motorized arm, and brought it back.


"This is fundamentally a different model," said Google's Brian Ichter. He is one of the authors of a recently released paper describing new advances in such robots.

Currently, robots are no longer a rarity. Millions of robots work in factories around the world, but they follow specific instructions and often focus on just one or two tasks. But building a robot that can complete a series of tasks and learn while doing it is much more complicated. For years, technology companies large and small have been working hard to build such "universal robots."

Large language models, which have surged in popularity in recent years, gave Google the inspiration it needed for developing "universal robots." Large language models use vast amounts of text from the Internet to train AI software to guess what kinds of responses are likely to follow a given question or comment.


From BERT to GPT-3, and later MT-NLG, as parameter counts grew rapidly, these models became so good at predicting plausible responses that interacting with one often feels like conversing with a knowledgeable human. With so much knowledge, wouldn't it be a pity to spend all day just chatting? If you can talk, you can work. From chatbots to assistant robots, Google's line of research follows naturally.

What’s so great about this PaLM-SayCan?

This time, Google AI, together with the Everyday Robot project launched by X (the moonshot factory of Google's parent company Alphabet), proposed a method: extract knowledge from a large language model (LLM) via pre-training, so that a robot can follow high-level text instructions to complete physical tasks.


The Everyday Robot project has been in the works for many years; many of the team members now collaborating with Google AI joined Alphabet in 2015 or 2016. The idea is to have robots use cameras and sophisticated machine learning algorithms to see and learn from the world around them, without having to be taught every potential situation they might encounter.


Google's idea is this: large language models encode rich semantic knowledge about the world, and that knowledge is very useful for robots meant to carry out tasks expressed in natural language. The obvious shortcoming of an LLM is its lack of real-world experience: it may perform perfectly in the laboratory yet be useless in real life.

Therefore, the researchers propose "grounding the model in the real world through pre-trained skills," constraining it to propose natural-language actions that are actually feasible in the current environment.

The robot can serve as the language model's "hands and eyes," while the language model supplies high-level semantic knowledge and real-world context about the task.

Google trained PaLM (Pathways Language Model) on a huge machine with 6,144 processors. The training data included a large collection of multilingual web documents, books, Wikipedia articles, conversations, and programming code from Microsoft-owned GitHub. An AI agent trained this way can explain jokes, complete sentences, answer questions, and reason along its own chain of thought.

The next question: if this agent is put into a robot, how do you extract and use the knowledge in the large language model (LLM) to complete physical tasks? For example, if my drink spills, GPT-3 will say you could try using a vacuum cleaner, and LaMDA will ask whether you want it to find a cleaner for you (neither of which is much help).


A large language model alone cannot carry out such an operation, because it does not interact with the real environment. SayCan, built on an LLM, draws its judgment of what is worth doing from pre-trained skill models, and can therefore handle instructions in complex, real environments.


Inspired by this example, the researchers studied the problem of how to extract knowledge from an LLM so that a robot can follow high-level textual instructions. The robot is equipped with a repertoire of learned skills for "atomic" behaviors, each capable of low-level visuomotor control. Besides asking the LLM simply to interpret an instruction, it can also be used to assess how likely each individual skill is to make progress toward completing the high-level instruction.

Assuming each skill has an affordance function that quantifies its probability of succeeding from the current state (for example, a learned value function), this value measures how feasible the skill is right now. The LLM, in turn, describes the probability that each skill contributes to completing the instruction; combining the two determines which skill the robot performs next.
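To make the combination concrete, here is a minimal, self-contained Python sketch of SayCan-style skill selection. The skill names, scores, and the `choose_next_skill` helper are invented for illustration and are not Google's actual code; in the real system the first set of scores would come from querying the language model and the second from learned value functions.

```python
# Illustrative sketch of SayCan-style skill selection.
# Combined score = (usefulness of the skill according to the LLM)
#                x (probability the skill can succeed from the current state).

INSTRUCTION = "My drink spilled, can you help?"

# Hypothetical LLM scores: how useful is each skill's description
# as the next step toward completing the instruction?
llm_usefulness = {
    "find a sponge": 0.40,
    "pick up the sponge": 0.30,
    "bring it to the user": 0.20,
    "find a vacuum cleaner": 0.35,
    "done": 0.05,
}

# Hypothetical affordance values: can the skill succeed *right now*?
# There is no vacuum cleaner in this kitchen, so that skill is infeasible.
affordance = {
    "find a sponge": 0.90,
    "pick up the sponge": 0.10,   # the sponge has not been located yet
    "bring it to the user": 0.05,
    "find a vacuum cleaner": 0.01,
    "done": 0.10,
}

def choose_next_skill() -> str:
    scored = {s: llm_usefulness[s] * affordance[s] for s in llm_usefulness}
    return max(scored, key=scored.get)

print(choose_next_skill())  # -> "find a sponge"
```

The point of multiplying the two scores is visible in the example: "find a vacuum cleaner" looks reasonable to the language model, but its affordance value is near zero because no vacuum cleaner is available, so the grounded choice is "find a sponge."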


The researchers used two metrics to evaluate the performance of the system:

(1) Planning success rate, indicating whether the robot has selected the correct skills for the instruction;

(2) Execution success rate, indicating whether it successfully executed the instruction.

The data shows that PaLM-SayCan also achieves the highest execution success rate among all the models compared.
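As a rough illustration of the difference between the two metrics (the trial records below are invented, not results from the paper), a trial can count as a planning success yet still fail at execution, so the execution success rate can never exceed the planning success rate:

```python
# Invented trial records, for illustration only.
trials = [
    {"plan_ok": True,  "exec_ok": True},
    {"plan_ok": True,  "exec_ok": False},  # right plan, but the grasp failed
    {"plan_ok": False, "exec_ok": False},
    {"plan_ok": True,  "exec_ok": True},
]

planning_rate = sum(t["plan_ok"] for t in trials) / len(trials)   # 0.75
execution_rate = sum(t["exec_ok"] for t in trials) / len(trials)  # 0.50

print(f"planning success: {planning_rate:.0%}, execution success: {execution_rate:.0%}")
```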

Risk: what if the robot learns bad things?

The idea is great, but the work is not without risks. The training corpora of large language models come from the Internet, and some language models have shown negative tendencies such as racism or sexism, and can sometimes be induced to produce hate speech or lies. If such a model is used to train a chatbot, the worst outcome is a voice assistant that curses and gossips. But what if it is used to train a robot that has hands and feet and can do bad things?

What is more dangerous still: if a robot trained this way ever became conscious, things could spiral out of control (science fiction films are full of such scenarios).

In July this year, a Google employee claimed that the company's chatbot software was sentient. The consensus among AI experts is that these models are not alive, but many worry they will exhibit bias because they are trained on large amounts of unfiltered, human-generated text.

Despite this, Google is pressing ahead. Researchers no longer need to code specific technical instructions for each of the robot's tasks; they can simply talk to it in everyday language. What's more, the new software helps robots parse complex multi-step instructions on their own.

Now, robots can interpret instructions they have never heard before and come up with meaningful responses and actions on their own.

For robots, a new door may have just opened, but the road ahead is still long. Artificial intelligence techniques such as neural networks and reinforcement learning have been used to train robots for years; there have been some breakthroughs, but progress remains slow.

Google's robot is far from ready for real-world use, and the researchers have repeatedly said that it is still at the laboratory stage, with no plans to commercialize it.

