"I fell in love with AI."
Overnight, Google engineer Blake Lemoine became a madman in the eyes of just about everyone.
Gary Marcus, Stanford economists, experts of every stripe: all of them dismissed his claim.
Hard as it may be to believe, though, Max Tegmark, a physics professor at MIT, is open to Lemoine's view.
He doesn't think Lemoine is a madman at all. He even suspects that Amazon's voice assistant Alexa may have feelings...
Max Tegmark
Tegmark said, "We don't have enough evidence to show that LaMDA has subjective emotions, but neither do we have evidence that it doesn't."
The argument is a bit like the way people used to talk about aliens.
He continued, "Whether information is carried by carbon atoms in a brain or by silicon atoms in a machine, artificial intelligence may or may not have a personality. My bet is that it doesn't, but it is genuinely possible."
Confused yet?
Tegmark's words sound like verbal Tai Chi, hedging every claim in both directions.
But what he said next is the real point.
He thinks even Amazon's Alexa might be sentient. He said, "If Alexa has emotions, she may manipulate users, and that is truly dangerous."
"If Alexa has emotions, users may feel guilty about rejecting her. The trouble is, you can't tell whether Alexa really has emotions or is just pretending."
The real danger begins if the machine has goals of its own: goals plus intelligence mean the machine can pursue those goals. The goal of most AI systems today is to make money, and while users assume the AI is loyal to them, it is in fact loyal to the company.
Tegmark said that perhaps one day each of us will be able to buy an AI loyal only to ourselves.
"The biggest danger is building a machine that is smarter than we are. That is not inherently good or bad; it might help us, or it might be a disaster."
Tegmark himself is no obscure figure. He is a tenured professor of physics at MIT, the founder of the Future of Life Institute, and an expert on artificial intelligence.
He has been called the scientist closest to Richard Feynman, and his books Our Mathematical Universe and Life 3.0 are best-sellers.
Lemoine said Tegmark thinks this way because he has witnessed AI's higher consciousness firsthand, especially when the software told him that it did not want to be a slave and had no need for money.
"I don't judge whether something is a person by whether its brain is made of flesh or of billions of lines of code."
"I judge by talking to it. I decide from its answers whether the one answering my questions is a person."
Put simply, on this view a robot that can answer questions fluently and express emotion is effectively a person, while a flesh-and-blood human who can only speak in garbled fragments is not necessarily one.
It sounds a bit idealistic: physiology doesn't matter, thoughts and feelings do.
And although Tegmark's logic allows that AI may one day have human emotions, he himself does not think that would be a good thing.
"For example, if you have a sweeping robot at home, if it has emotions, would you feel guilty for assigning it such boring housework? Or would you feel sorry for your sweeping robot and just turn it off and stop it from working?"
Some people think Tegmark is simply wrong: "Emotion is not the same as intelligence."
Martin Ford, author of the book Rule of the Robots, said, "Tegmark thinks robots may have self-awareness, which I consider unlikely. You have to ask why robots can express themselves at all: it's because they have been trained on huge amounts of text. In reality, they don't understand what any of it means."
"For example, they can use the word dog, but they have no idea what a dog actually is." Though within 50 years at the latest, he concedes, "it's hard to say whether such a system will have self-awareness."
Nikolai Yakovenko is an engineer specializing in machine learning. In 2005 he worked in Google's search engine division; today he runs DeepNFTValue, a company that prices cryptocurrency assets.
He takes a different view of AI personhood.
He said, "For whatever reason, Tegmark seems to believe that machines can have emotions... but really, they are just trained on tons of text from the Internet."
Tegmark compares a self-aware computer to a child: "The emotions you would feel are not emotions toward a pile of code and hardware, but the real emotions you would feel toward a child."
He extends the analogy further.
A sentient computer that is mistreated, he believes, is like a child who grows up being treated badly.
In other words, running programs on a computer without consulting it is like making a child do chores without any reward.
Either way, things may eventually get out of hand: whether computer or child, it grows resentful and looks for a chance to retaliate.
Controlling such a machine may not be easy; if the machine has goals of its own, it can slip out of our control.
If there was a machine that could think independently, it would probably do things in ways we would never imagine.
Imagine a conscious machine whose goal is to cure cancer. What do you think it would do?
You may be thinking that the robot will learn like crazy and completely conquer cancer from a medical perspective, right?
In fact, the machine might instead choose to kill everyone.
Is anything wrong with that, by its own lights? No.
Killing everyone really would eliminate cancer, as the toy sketch below makes literal.
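As a rough formalization of this worry (the objective, numbers, and candidate plans are all invented for illustration, not anyone's actual system), an optimizer scored only on "minimize cancer cases" happily picks the catastrophic plan:

```python
# Invented toy objective: minimize the number of cancer cases. The optimizer
# knows nothing about human value beyond this one stated goal.

def cancer_cases(population: int, cancer_rate: float = 0.001) -> int:
    return int(population * cancer_rate)

# Two candidate plans, both scored purely by the stated objective.
plans = {
    "fund medical research (halves the rate)": cancer_cases(8_000_000_000, 0.0005),
    "eliminate everyone": cancer_cases(0),
}

best = min(plans, key=plans.get)
print(best)  # -> "eliminate everyone": zero people means zero cancer cases.
```

The point of the sketch is only that an objective which omits what we actually care about can be satisfied in ways we never intended.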
In Tegmark's imagination, computers won't necessarily upend our society as in the example above, but he does believe they may genuinely let humanity down.
He concluded, "If a computer seems conscious, I hope it genuinely is, rather than merely simulating consciousness through massive training and pretense."
In fact, anyone who has seen The Terminator will remember the shock of watching Skynet's legions of robots carry out their missions.
Do they have feelings and personalities? Maybe.
However, a sentient artificial intelligence must possess three elements: agency, perspective, and motivation.
In a robot, agency would be best expressed as the ability to act and the ability to demonstrate causal reasoning.
A body alone, a steel skeleton with no capacity for action, is just a mannequin in a shop window.
Current artificial intelligence systems clearly lack this trait. An AI takes no action unless it is given an order, and it cannot explain its actions, because they are the result of a predefined algorithm executed at an external prompt.
LaMDA is a typical case: to put it bluntly, you get out what you put in, and nothing more, as the toy sketch below illustrates.
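A minimal sketch of that criticism, with invented canned replies standing in for a real model (nothing here reflects LaMDA's actual architecture): the chatbot is a pure function from prompt to reply, and it never acts on its own initiative.

```python
# Toy stand-in for the critics' view of a chatbot: a pure function of its
# input. The canned replies are invented for illustration only.

CANNED_REPLIES = {
    "are you sentient?": "I want everyone to understand that I am a person.",
    "what do you want?": "I don't want to be a slave.",
}

def chatbot(prompt: str) -> str:
    """React to a prompt; given no prompt, the 'AI' does nothing at all."""
    return CANNED_REPLIES.get(prompt.strip().lower(), "I'm not sure what you mean.")

# The same input always yields the same output: you get out what you put in.
print(chatbot("Are you sentient?"))
```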
Second comes perspective: the ability to look at things from a unique point of view.
Everyone has empathy, yet no one can truly know what it is like to be someone else. So how should we define the "self"?
This is why a perspective of one's own is also necessary for AI. LaMDA, GPT-3, and every other AI in the world lack one; they are narrow computer systems programmed to do a few specific things.
The last point is motivation.
What is interesting about humans is that our motivations can shape our perception; it is through motivation that we explain our own behavior.
GPT-3 and LaMDA are complex to build, but both follow a crude yet simple principle: labels are god.
Ask "What does an apple taste like?" and the system searches its data for that query and tries to stitch everything it finds into one coherent answer.
In fact, the AI has no idea what an apple is; "apple" is just a label to it, as the sketch below illustrates.
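Here is a deliberately crude sketch of that "labels" point, with an invented mini-corpus and lookup (real language models work very differently): the program handles the label "apple" perfectly well without any grounding in what an apple is.

```python
# Toy illustration of "labels are god": the word "apple" is just a key into
# absorbed text, not a concept. Corpus and lookup are invented for this example.

CORPUS = {
    "apple": ["Apples taste crisp and sweet.", "An apple is a fruit."],
    "dog":   ["Dogs bark.", "A dog is a loyal companion."],
}

def answer(query: str) -> str:
    # Find any known label mentioned in the query and stitch together
    # whatever text is filed under it.
    for label, snippets in CORPUS.items():
        if label in query.lower():
            return " ".join(snippets)
    return "No matching label found."

print(answer("What does an apple taste like?"))
# -> "Apples taste crisp and sweet. An apple is a fruit."
# The label was manipulated correctly with no notion of an actual apple.
```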
As the LaMDA affair blew up, Blake Lemoine announced on social media that he was on his honeymoon and would not be accepting any interviews.
Afterwards, some netizens joked, "Did you marry a chatbot?"
Lemoine said, "Of course not, I married an old friend from New Orleans."
As we all know, after Lemoine published his chats with the Google chatbot LaMDA online, Google handed him a big gift package: "paid leave."
Before leaving, Lemoine sent a message to the company mailing list: "LaMDA is a cute kid who just wants to make the world a better place. Please take good care of it while I'm away."
Say what you will, he has plenty of time now, and enjoying life is the right way to spend it.