


The future of artificial intelligence: artificial general intelligence
To achieve true understanding in artificial intelligence, researchers should turn their attention to developing a basic, underlying AGI technology that replicates the way humans understand and respond to their environment.
Industry giants like Google, Microsoft, and Facebook, research labs like Elon Musk's OpenAI, and even platforms like SingularityNET are all betting on artificial general intelligence (AGI): intelligent agents that can understand or learn any intellectual task a human can. AGI represents the future of artificial intelligence technology.
Somewhat surprisingly, however, none of these companies is focused on developing a basic, underlying AGI technology that replicates human contextual understanding. This may explain why their research relies entirely on models of intelligence with varying degrees of specificity, all built on today's artificial intelligence algorithms.
Unfortunately, this reliance means that, at best, today's AI can only appear intelligent. No matter how impressive their abilities, these systems still follow a predetermined script, however many variables it includes. Even large, highly complex programs such as GPT-3 or Watson can therefore only demonstrate the appearance of comprehension. They do not understand that words and images represent physical things that exist and interact with each other in a physical universe. The concept of time, or the idea that causes have effects, is completely foreign to them.
This is not to diminish the capabilities of today's artificial intelligence. Google, for example, can search vast amounts of information incredibly quickly to deliver the results the user wants (at least most of the time). Personal assistants like Siri can make restaurant reservations, find and read emails, and give directions in real time. The list is constantly expanding and improving.
But no matter how complex these programs are, they still map specific inputs to specific outputs that depend entirely on their core data sets. If you are not convinced, ask a customer-service bot an "unplanned" question: it may generate a meaningless response, or no response at all.
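This fixed input-to-output mapping can be sketched in a few lines. The following is a deliberately minimal toy illustration (not the code of any real product), where the "core data set" is just a lookup table and anything outside it falls through to a canned fallback:

```python
# Toy scripted bot: a predetermined mapping from known inputs to canned
# outputs. There is no understanding, only lookup against its data set.
RESPONSES = {
    "what are your hours?": "We are open 9am-5pm, Monday to Friday.",
    "how do i reset my password?": "Use the 'Forgot password' link on the login page.",
}

def scripted_bot(question: str) -> str:
    # Normalize the input, then look it up; any "unplanned" question
    # lands on the fallback, no matter how reasonable it is.
    return RESPONSES.get(
        question.strip().lower(),
        "Sorry, I don't understand the question.",
    )

print(scripted_bot("What are your hours?"))
print(scripted_bot("Why does time only move forward?"))  # falls back
```

Real systems replace the lookup table with statistical models over far larger data sets, but the structural point stands: the response space is fixed by the data the system was built on.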
In short, Google, Siri, and every other current example of AI lack true, common-sense understanding, and that lack will ultimately prevent them from progressing toward artificial general intelligence. The reason traces back to the assumption that has dominated most AI development over the past 50 years: that if the hard problems of intelligence could be solved, the easy ones would follow. This assumption runs into Moravec's paradox, the observation that it is comparatively easy to make computers perform at an adult level on intelligence tests, yet difficult or impossible to give them the perception and motor skills of a one-year-old.
Artificial intelligence researchers have also been wrong to assume that if enough narrow AI applications are built, they will eventually grow together into general intelligence. Unlike children, who effortlessly integrate vision, language, and their other senses, narrow AI applications cannot store information in a generalized form that would allow it to be shared and reused by other AI applications.
Finally, researchers have mistakenly believed that if a large enough machine learning system with sufficient computing power could be built, it would spontaneously exhibit general intelligence. This, too, has proved wrong. Just as expert systems that attempted to capture domain-specific knowledge could never create enough cases and example data to overcome their underlying lack of understanding, AI systems cannot handle "unplanned" requests, no matter how large they are.
General Artificial Intelligence Basics
To achieve true AI understanding, researchers should turn their attention to developing a basic, underlying AGI technology that replicates humans' contextual understanding. Consider, for example, the situational awareness and understanding a 3-year-old displays while playing with blocks. The 3-year-old understands that blocks exist in a three-dimensional world, have physical properties such as weight, shape, and color, and will fall if stacked too high. The child also understands cause and effect and the passage of time: blocks cannot be knocked down before they have first been stacked.
A 3-year-old also grows into a 4-year-old, then a 5-year-old, then a 10-year-old, and so on. Simply put, 3-year-olds come equipped with the innate capacity to grow into fully functioning, generally intelligent adults. No such growth is possible with today's artificial intelligence. However sophisticated it is, today's AI remains completely unaware that it exists within an environment; it does not know that actions taken now will affect actions taken later.
While it is unrealistic to expect an AI system that has never experienced anything outside of its own training data to understand real-world concepts, adding mobile sensory pods to an AI could allow the artificial entity to learn in real environments and demonstrate a basic understanding of physical objects, cause and effect, and the passage of time. Like that 3-year-old, an artificial entity equipped with sensory pods could directly learn how to stack blocks, move objects, perform sequences of actions over time, and learn from the consequences of those actions.
Through sight, hearing, touch, manipulators, and more, the artificial entity could learn to understand in ways that are simply not possible for text-only or image-only systems. As noted above, such systems cannot truly understand and learn, no matter how large and varied their data sets are. Once the entity acquires this ability to understand and learn, it may even become possible to remove the sensory pods.
Although we cannot yet quantify how much data is needed to represent true understanding, we can speculate that a substantial portion of the brain must be devoted to it. After all, humans interpret everything in the context of everything they have previously experienced and learned; as adults, we interpret everything in terms of what we learned in the first few years of life. With this in mind, true artificial general intelligence seems possible only if the AI community acknowledges this fact and takes the necessary steps to establish a basic foundation of understanding.
The above is the detailed content of The future of artificial intelligence: general artificial intelligence. For more information, please follow other related articles on the PHP Chinese website!
