
Marcus confronts Musk: You still want to make an all-purpose home robot, that's stupid!

王林
Release: 2023-04-12 08:25:02

Two days ago, Google unveiled a new robotics research project: PaLM-SayCan.

To put it simply: "You're a grown-up robot now; you can learn to serve me on your own."


But Marcus doesn’t think so.

I understand, you want to be a "Terminator"

Judging from the performance, Google's new robot PaLM-SayCan is indeed very cool.

When humans say something, robots listen and act immediately without hesitation.

This robot is quite "sensible". You don't have to say "bring me pretzels from the kitchen"; just say "I'm hungry" and it will walk to the table by itself and bring you a snack.

No need for extra nonsense, no need for more details. Marcus admits: It’s really the closest thing to Rosie the Robot I’ve ever seen.


It is clear from this project that two historically independent Alphabet divisions, Everyday Robots and Google Brain, have poured a lot of energy into it. Chelsea Finn and Sergey Levine, who both participated in the project, are leading academics in the field.

Obviously, Google threw a lot of resources at the problem (large pre-trained language models, humanoid robots, and plenty of cloud computing) to build such an impressive robot.

Marcus said: I'm not surprised at all that they can build robots so well, I'm just a little worried - should we do this?

Marcus believes that there are two problems.

A bull in a china shop

First of all, as we all know, the language technology the new system relies on has well-documented problems of its own; second, in a robotics context, those problems may be even more serious.

Putting robots aside, we already know that so-called large language models are like bulls: powerful and reckless enough to rampage through a china shop. One moment they can head straight for a target; the next, they veer off into unknown danger. A particularly vivid example comes from the French company Nabla, which explored using GPT-3 as a medical advisor, only to watch it advise a (simulated) patient to kill himself.


Examples like this are too numerous to list.

DeepMind, another Alphabet subsidiary, has identified 21 social and ethical risks of large language models, covering topics such as fairness, data leakage, and misinformation.


Paper address: https://arxiv.org/pdf/2112.04359.pdf

But they failed to mention this: robots with such models embedded in them could kill your pets or wreck your house.

We should really pay attention to this. The PaLM-SayCan experiment makes it clear that the list of 21 risks needs updating.

For example, large language models may suggest that people commit suicide, or endorse genocide, or spew toxic content.

And they are highly (over)sensitive to the details of their training sets. When you put such models into a robot, a misunderstanding of what you said, or of what you actually meant, could land you in big trouble.

To their credit, the PaLM-SayCan team at least thought about preventing this sort of thing.

Every request the robot receives goes through a feasibility check: the language model infers whether what the user wants done can actually be accomplished.
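As a rough illustration of the idea, here is a minimal sketch of what such a check might look like. It is purely illustrative: the function names and scoring interfaces are hypothetical stand-ins, not Google's actual code; the only element taken from the published approach is that a language-model relevance score is combined with a learned estimate of whether the robot can physically carry out the action.

```python
# Minimal sketch of a SayCan-style feasibility check (illustrative only;
# the names below are hypothetical stand-ins, not Google's actual API).
from typing import Callable

def pick_next_skill(
    instruction: str,
    candidate_skills: list[str],
    llm_score: Callable[[str, str], float],    # how relevant the skill is to the instruction
    affordance_score: Callable[[str], float],  # how likely the skill is to succeed right now
) -> str | None:
    """Combine language relevance with physical feasibility.

    Relevant-but-infeasible actions (and feasible-but-irrelevant ones)
    are filtered out because their combined score stays low.
    """
    best_skill, best_score = None, float("-inf")
    for skill in candidate_skills:
        combined = llm_score(instruction, skill) * affordance_score(skill)
        if combined > best_score:
            best_skill, best_score = skill, combined
    return best_skill
```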

But is this foolproof? If the user asks the system to put the cat in the dishwasher, this is indeed possible, but is it safe? Is it ethical?


Similar problems can occur if the system misunderstands humans.

For example, suppose someone says "put it in the dishwasher" and the large language model resolves the referent of "it" to the cat, when the user actually meant something else.

Everything we have learned from research on large language models tells us that they are simply not reliable enough to give us 100% certainty about a user's intent. Misunderstandings are inevitable, and some of them can lead to disaster if these systems are not subjected to truly rigorous scrutiny.

Maayan Harel drew a great illustration for Rebooting AI in which a robot is told to put away everything in the living room.


Do you remember the story in the third season of "Love, Death & Robots", where the owner is told to throw the cat at the berserk robot vacuum?

The gap with the real world

The reality is that there is currently no feasible way to solve the many "alignment" problems that plague large language models.

As Marcus mentioned before: large language models are superficial statistical simulations rather than models that convey rich knowledge of the world around them. Building a robot on a language system that knows little about the world is unlikely to succeed.

And that is exactly what Google's new system does: it stitches a shallow, unreliable language understander onto a powerful and potentially dangerous humanoid robot. As the saying goes: garbage in, garbage out.

Keep in mind that there is often a huge gulf between a demonstration and reality.

Self-driving car demonstrations have been around for decades, but getting them to work reliably has proven to be much harder than we thought.

Google co-founder Sergey Brin promised in 2012 that we would have self-driving cars by 2017; now, in 2022, they are still stuck in limited experimental trials.

Marcus warned in 2016 that the core problem is edge cases:

Driverless cars perform great in common situations. Put them out on a sunny day in Palo Alto and they're terrific. Put them somewhere with snow or rain, or somewhere they haven't seen before, and it gets hard for them. Steven Levy has a great article about Google's autonomous-car factory, in which he describes the big win at the end of 2015 as finally getting these systems to recognize leaves. It's great that the system can recognize leaves, but there are plenty of less common things for which there simply isn't as much data.

This remains the core issue. Only in recent years has the self-driving-car industry woken up to this reality. As Waymo AI/ML engineer Warren Craddock recently put it in a thread worth reading in its entirety:


The truth is that there are countless edge cases. There are countless different Halloween costumes. The speeds at which someone might run a red light form a continuum. Edge cases cannot be enumerated, and even if they could be, it wouldn't help!

And, most importantly:


Once you understand that edge cases are infinite in nature, you can see how complex the problem is. The nature of deep networks, their underlying mechanics, means that edge cases are easily forgotten. You can't encounter an edge case once and expect it to be handled from then on.

There is no reason to think that robots, or robots with natural-language interfaces like Google's new system, are exempt from these problems.

Interpretability Issue

Another issue is interpretability.

Google has put real effort into making the system somewhat interpretable, but there is still no obvious way to combine large language models with the kinds of formal verification methods routinely used in the design of microprocessors, USB drivers, and large aircraft.

Yes, you don't need verification to have GPT-3 or PaLM write surreal prose, and you can fool a Google engineer into believing your software is sentient without ensuring that the system is coherent or correct.

But a humanoid home robot that handles a wide range of household chores (not just a Roomba-style vacuum) will need to do more than chat with its users: it will need to carry out their requests reliably and safely. Without a far greater degree of explainability and verifiability, it is hard to see how we reach that level of safety.

The "more data" that the driverless car industry has been betting on is not so likely to succeed. Marcus said this in that 2016 interview, and he still thinks so today—big data is not enough to solve the robot problem:

If you want a robot in your home (I still fantasize about Rosie the robot doing my chores), it can't afford to make mistakes. [Reinforcement learning] is very much trial and error at massive scale. If you have a robot in your home, you can't have it crashing into your furniture over and over. You don't want it to put your cat in the dishwasher even once. You don't get data at the same scale. For robots in real-world environments, what we need is for them to learn quickly from small amounts of data.

Google and Everyday Robots are aware of all this, and have even made a hilarious video acknowledging it.

But that hasn't stopped some media outlets from getting carried away, dressing the research up with exaggerated headlines that make it sound as though a key problem has been solved.


Google’s new robot learns to listen to commands through “web crawling”

This is reminiscent of two magazine articles from 2015, both with similarly optimistic headlines, about projects that never came to fruition.


Facebook announced the launch of Project M to challenge Siri and Cortana

And this article:


Deep learning will soon allow us to have super-intelligent robots

We all know how the story ended: Facebook M was shut down, and in the seven years since, no one has been able to buy a super-intelligent robot at any price.

Whoever believes it is a fool

Of course, Google's new robot really did learn to take some orders by "scraping the web", but in robotics the devil is in the details.

Under ideal conditions, with only a limited set of options to choose from, the robot's performance is roughly 75%. The more actions it has available to choose from, the worse its performance is likely to get.

The PaLM-SayCan robot only has to handle six verbs; humans use thousands. And if you read Google's report carefully, you will find that on some operations the system's success rate is 0%.

For a general-purpose humanoid home robot, 75% is nowhere near enough. Imagine a home robot asked to lift grandpa into bed that succeeds only three times out of four.
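To see why, a back-of-envelope calculation helps. The numbers below are my own illustration, assuming each step of a chore succeeds independently at the roughly 75% rate quoted above: success compounds away quickly once a task takes more than a couple of steps.

```python
# Rough illustration: end-to-end success of a multi-step chore,
# assuming ~75% success per step and independent steps (my own
# back-of-envelope numbers, not figures from Google's report).
per_step_success = 0.75
for steps in (1, 3, 5, 10):
    print(f"{steps:2d} steps -> {per_step_success ** steps:.1%} end-to-end success")
# Output:
#  1 steps -> 75.0% end-to-end success
#  3 steps -> 42.2% end-to-end success
#  5 steps -> 23.7% end-to-end success
# 10 steps -> 5.6% end-to-end success
```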

Yes, Google did a cool demo. But it’s still far from a real-world product.

PaLM-SayCan offers a vision of a future in which, like the Jetsons, we can talk to robots and have them help with everyday chores. It is a beautiful vision.

But, Marcus says, if any of us, Musk included, are "holding our breath" expecting such a system to arrive in the next few years, we are fools.
