Table of Contents
Spatial intelligence, allowing AI to understand the real world
The evolution of biological vision
The rise of computer vision
Spatial Intelligence: Just looking is not enough
The application prospects of spatial intelligence

Li Feifei reveals the entrepreneurial direction of 'spatial intelligence': visualization turns into insight, seeing becomes understanding, and understanding leads to action

Jun 01, 2024, 02:55 PM

After starting her own company, Stanford's Li Feifei has revealed a new concept, "spatial intelligence," for the first time.

This is not only her entrepreneurial direction, but also the "North Star" guiding her work. She considers it "the key puzzle piece for solving the problem of artificial intelligence."

Vision becomes insight; seeing becomes understanding; understanding leads to action.


This article is based on the full text of Li Feifei's 15-minute TED talk, which traces an arc from the origins of life hundreds of millions of years ago, to humanity's refusal to be satisfied with the vision nature gave us, to the next step: building spatial intelligence.


Nine years ago, on this same stage, Li Feifei introduced the newly born ImageNet to the world, one of the starting points of the current deep learning explosion.


She also confidently told netizens: watch both talks, and you will come away with a good understanding of the past ten years of computer vision, spatial intelligence, and AI.


Below, we present the content of Li Feifei's speech, organized without changing its original meaning.

Spatial intelligence, allowing AI to understand the real world

The evolution of biological vision

Let me show you something. To be exact, I will show you "nothing."

This is the world 540 million years ago: pure, endless darkness. It was not dark for lack of light. It was dark for lack of sight.

Although sunlight could penetrate 1,000 meters below the ocean surface, and light from hydrothermal vents illuminated a seafloor teeming with life, there was not a single eye to be found in these ancient waters.

No retina, no cornea, no lens. So all this light, all this life, remains unseen.

There was a time when the very concept of "seeing" did not exist. It had simply never been realized, until it was.

For reasons we are only beginning to understand, the first organisms that could sense light appeared: trilobites. They were the first creatures capable of sensing the reality we take for granted, the first to discover that there was something other than themselves.

For the first time, the world was filled with many "selves."


Visual ability is thought to have triggered the Cambrian explosion, the period when animal species entered the fossil record in large numbers. What began as a passive experience, the simple act of letting light in, soon became more active, and the nervous system began to evolve.

Vision becomes insight. Seeing becomes understanding. Understanding leads to action.

All of this gave rise to intelligence.

The rise of computer vision

Today, we are no longer satisfied with the visual capabilities given by nature. Curiosity drives us to create machines that can see at least as well as we do, if not better.

Nine years ago, on this stage, I delivered an early progress report on computer vision.

At that time, three powerful forces came together for the first time:

  • A class of algorithms called neural networks
  • Fast, specialized hardware called graphics processing units, or GPUs
  • Big data, such as ImageNet, the 15 million images my lab spent several years curating

Together they ushered in the modern era of artificial intelligence.


We have come a long way from then to now.

In the beginning, simply labeling images was a major breakthrough, but the speed and accuracy of the algorithms quickly improved.

This progress was measured by the annual ImageNet Challenge hosted by my lab. In this chart, you can see the improvement in model capability each year, along with some of the milestone models.


We went a step further and created algorithms that can segment visual objects or predict dynamic relationships between them, work done by my students and collaborators.

And there is more.

Recall the computer vision algorithm I showed in my last talk: an AI that can describe a photo in natural human language. That was work I did with my brilliant student Andrej Karpathy.
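
For readers curious about how such captioning systems were built: the early models paired a convolutional network that summarizes the image with a recurrent network that emits words one at a time. Below is a minimal, hypothetical sketch of that encoder-decoder pattern in PyTorch, with a toy vocabulary and untrained weights; it illustrates the general idea, not the actual model from the talk.

```python
# Hypothetical encoder-decoder captioning sketch; toy vocabulary,
# untrained weights. NOT the model from the talk.
import torch
import torch.nn as nn

VOCAB = ["<start>", "<end>", "a", "cat", "on", "table"]  # toy vocabulary

class CaptionNet(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        # Tiny CNN "encoder": image -> feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, feat_dim),
        )
        # RNN "decoder": image features seed the hidden state.
        self.embed = nn.Embedding(len(VOCAB), feat_dim)
        self.rnn = nn.GRUCell(feat_dim, feat_dim)
        self.out = nn.Linear(feat_dim, len(VOCAB))

    @torch.no_grad()
    def caption(self, image, max_len=8):
        h = self.encoder(image)               # (1, feat_dim)
        tok = torch.tensor([0])               # <start> token
        words = []
        for _ in range(max_len):
            h = self.rnn(self.embed(tok), h)
            tok = self.out(h).argmax(dim=-1)  # greedy decoding
            if VOCAB[tok.item()] == "<end>":
                break
            words.append(VOCAB[tok.item()])
        return " ".join(words)

net = CaptionNet()
print(net.caption(torch.rand(1, 3, 32, 32)))  # gibberish until trained
```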


At that time, I boldly asked: "Andrej, can we make the computer do the opposite?" Andrej smiled and said: "Haha, that's impossible."

Well, as you can see today, the impossible has become possible.

This is thanks to diffusion models, the family of algorithms powering today's generative AI, which can transform human prompts into entirely new photos and videos.
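
For the mechanically minded: the heart of a diffusion model is a reverse denoising loop that starts from pure noise and repeatedly removes the noise a trained network predicts. Below is a minimal DDPM-style sketch of that loop; the noise schedule is assumed for illustration, and `denoiser` is an untrained stand-in, not the model behind Sora, Walt, or any real product.

```python
# Minimal DDPM-style reverse sampling loop; schedule and denoiser are
# illustrative stand-ins, not any production model.
import torch

T = 50                                    # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)     # assumed noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def denoiser(x, t):
    # Stand-in for a trained noise-prediction network (e.g. a U-Net).
    return torch.zeros_like(x)

x = torch.randn(1, 3, 8, 8)               # start from pure Gaussian noise
for t in reversed(range(T)):
    eps = denoiser(x, t)                  # predicted noise at step t
    # DDPM mean update: remove the predicted noise component.
    x = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
    if t > 0:
        x = x + torch.sqrt(betas[t]) * torch.randn_like(x)  # re-inject noise

print(x.shape)  # the final x would be the generated image
```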

Many of you have seen OpenAI's Sora achieve impressive results recently. But a few months before that, and without a massive number of GPUs, my students and collaborators had developed an AI video generation model called Walt.

Walt, published in December 2023

There is still room for improvement here. Look at that cat's eyes, and the way it never gets wet under the waves. What a disaster. A cat-astrophe!

(Puns like that get your pay docked!)

Spatial Intelligence: Just looking is not enough

The past is prologue. We will learn from these mistakes and create the future we imagine, a future in which we want AI to do everything it can for us, or to help us do things.

For years I have been saying that taking pictures is not the same thing as seeing and understanding. Today, I would like to add one more point: just looking is not enough.

We see in order to act and to learn.


When we act in 3D space and time, we learn; we learn to see better and to act better. Nature has created a virtuous cycle of seeing and acting through "spatial intelligence."

To demonstrate what spatial intelligence is, take a look at this photo. If you feel the urge to do something, raise your hand.


In a split second, your brain observes the geometry of the cup, its position in 3D space, its relationship to the table, the cat, and all the other objects, and you can predict what will happen next.

The urge to act is inherent in all creatures with spatial intelligence, which links perception to action.

If we want AI to go beyond current capabilities, we not only want AI that can see and speak, we want AI that can act.

In fact, we are making exciting progress.

The latest milestone in spatial intelligence is teaching computers to see, learn, act, and learn to see and act better.

And it’s not easy.

Nature has spent millions of years evolving spatial intelligence. The eye captures light and projects a 2D image onto the retina, and the brain converts this data into 3D information.

Just recently, a group of researchers at Google developed an algorithm that converts a set of photos into a 3D space.
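
The speech does not name the method, but the best-known algorithm in this photos-to-3D line of work is NeRF-style volume rendering: march a camera ray through the scene, query a learned field for density and color at each sample, and alpha-composite the results into a pixel. A minimal numpy sketch under that assumption follows, with a hand-written "red sphere" field standing in for the trained network.

```python
# NeRF-style volume rendering sketch (assumed method, not named in the
# talk); a hard-coded red sphere stands in for a learned field.
import numpy as np

def field(points):
    # Stand-in for a learned radiance field: a fuzzy red sphere.
    dist = np.linalg.norm(points, axis=-1)
    density = np.where(dist < 1.0, 5.0, 0.0)            # opaque inside sphere
    color = np.tile([1.0, 0.2, 0.2], (len(points), 1))  # constant red
    return density, color

def render_ray(origin, direction, n_samples=64, near=0.0, far=4.0):
    ts = np.linspace(near, far, n_samples)
    pts = origin + ts[:, None] * direction              # samples along the ray
    density, color = field(pts)
    delta = (far - near) / n_samples
    alpha = 1.0 - np.exp(-density * delta)              # opacity per sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha                             # compositing weights
    return (weights[:, None] * color).sum(axis=0)       # final RGB

rgb = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
print(rgb)  # a red pixel where the ray hits the sphere
```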


My students and collaborators went a step further and created an algorithm that turns a single image into a 3D shape.


A team of researchers at the University of Michigan has found a way to convert sentences into 3D room layouts.


My colleague at Stanford University and his students have developed an algorithm that can generate an infinite space of possibilities from a single image for viewers to explore.

These are prototypes of a future possibility: one in which humans can transform our entire world into digital form and simulate its richness and subtlety.

What nature does implicitly in each of our minds, spatial intelligence technology promises to do for our collective consciousness.

With the accelerating progress of spatial intelligence, a new era is unfolding before our eyes within this virtuous cycle. This cycle is catalyzing robot learning, a key component of any embodied intelligence system that needs to understand and interact with the 3D world.

Ten years ago, my lab's ImageNet provided a database of millions of high-quality photos to help train computers to see.

Today, we are doing something similar: training computers and robots to act in the 3D world. This time, instead of collecting static images, we are developing simulation environments driven by 3D spatial models, so that computers can learn from the infinite possibilities of action.
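
To make that concrete, here is a hypothetical sketch of the gym-style loop such simulation environments typically expose: the agent observes, acts, and receives a reward, over and over. The one-dimensional "reach the target" world below is purely illustrative; it is not the Behavior benchmark itself.

```python
# Purely illustrative gym-style environment loop; NOT the Behavior
# benchmark, just the observe-act-reward pattern it relies on.
import random

class ToyReachEnv:
    """Robot hand on a line; goal: move to within 0.05 of the target."""
    def reset(self):
        self.hand, self.target = 0.0, random.uniform(-1, 1)
        return (self.hand, self.target)            # observation

    def step(self, action):                        # action: velocity command
        self.hand += max(-0.1, min(0.1, action))   # clamp per-step motion
        dist = abs(self.hand - self.target)
        reward = -dist                             # closer is better
        done = dist < 0.05
        return (self.hand, self.target), reward, done

env = ToyReachEnv()
obs = env.reset()
for step in range(200):
    hand, target = obs
    action = target - hand                         # trivial "policy"
    obs, reward, done = env.step(action)
    if done:
        print(f"reached target in {step + 1} steps")
        break
```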

What you just saw is a small sample of how we teach robots, part of a project led by my lab called Behavior.

We are also making exciting progress in robotic language intelligence.

Using input based on large language models, my students and collaborators were among the first teams to demonstrate that a robotic arm can perform a variety of tasks based on verbal instructions.

Like opening this drawer, or unplugging a phone cord. Or making a sandwich with bread, lettuce, and tomatoes, and even placing a napkin for the user. Normally I would like my sandwich a little more substantial, but this is a good place to start.
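
As an illustration of this instruction-to-action pattern (a sketch, not the team's actual system): a language model decomposes a verbal instruction into a sequence of primitive skills the arm already knows, and a controller executes them in order. The `ask_llm` function below is a canned stub standing in for a real LLM call.

```python
# Hypothetical instruction-to-action sketch; `ask_llm` is a canned stub,
# and the "skills" just print. NOT the team's actual system.
SKILLS = {
    "pick": lambda obj: print(f"[arm] picking up {obj}"),
    "place": lambda obj: print(f"[arm] placing {obj}"),
    "open": lambda obj: print(f"[arm] opening {obj}"),
}

def ask_llm(instruction):
    # Stand-in for an LLM call that plans with the available skills,
    # e.g. "Decompose '{instruction}' into pick/place/open steps."
    canned = {
        "make a sandwich": [
            ("pick", "bread"), ("place", "bread"),
            ("pick", "lettuce"), ("place", "lettuce"),
            ("pick", "tomato"), ("place", "tomato"),
            ("pick", "bread"), ("place", "bread"),
            ("pick", "napkin"), ("place", "napkin"),
        ],
    }
    return canned.get(instruction, [])

def execute(instruction):
    for skill, obj in ask_llm(instruction):
        SKILLS[skill](obj)   # each primitive runs on the robot controller

execute("make a sandwich")
```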

The application prospects of spatial intelligence

In the primordial oceans of ancient times, the ability to see and sense the environment triggered the Cambrian explosion of interaction with other life forms.

Today, that light is reaching digital minds.

Spatial intelligence allows machines to interact not only with each other, but also with humans, and with the real or virtual 3D world.

As this future takes shape, it will have a profound impact on many lives.

Let's take healthcare as an example. Over the past decade, my lab has been making early efforts to apply AI to challenges that affect patient outcomes and healthcare staff fatigue.

Together with collaborators from Stanford School of Medicine and other partner hospitals, we are piloting smart sensors that can detect when a clinician enters a patient room without properly washing their hands, track surgical instruments, or alert care teams when a patient is at physical risk, such as of a fall.

We think of these technologies as a form of ambient intelligence, like extra pairs of eyes.

But I would prefer more interactive assistance for our patients, clinicians, and caregivers, who desperately need an extra pair of hands.

Imagine an autonomous robot transporting medical supplies while caregivers focus on the patient, or using augmented reality to guide surgeons through safer, faster, less invasive procedures.


Also imagine that severely paralyzed patients could control robots with their thoughts. That’s right, using brain waves to perform the everyday tasks you and I take for granted.

This is a recent pilot study conducted in my lab. In this video, a robotic arm, controlled solely by electrical signals from the brain, is cooking a Japanese sukiyaki meal, with the signals collected non-invasively through an EEG cap.
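
For the curious, a common pattern in non-invasive brain-computer interface work is to extract frequency-band power from the EEG signal and map it to a discrete command. The sketch below is purely illustrative, with a synthetic signal and made-up thresholds; it is not the lab's actual decoding pipeline.

```python
# Illustrative EEG band-power decoding; synthetic signal, made-up
# thresholds. NOT the lab's actual pipeline.
import numpy as np

FS = 250  # sampling rate in Hz (typical for EEG caps)

def band_power(signal, lo, hi):
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    return spectrum[(freqs >= lo) & (freqs < hi)].mean()

def decode_command(eeg_window):
    # Compare alpha (8-13 Hz) vs beta (13-30 Hz) power; purely illustrative.
    alpha = band_power(eeg_window, 8, 13)
    beta = band_power(eeg_window, 13, 30)
    return "grasp" if beta > alpha else "rest"

# One second of synthetic EEG: noise plus a strong 20 Hz (beta) rhythm.
t = np.arange(FS) / FS
window = 0.5 * np.random.randn(FS) + 2.0 * np.sin(2 * np.pi * 20 * t)
print(decode_command(window))  # -> "grasp" (beta-dominant window)
```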

Li Feifei reveals the entrepreneurial direction of spatial intelligence: visualization turns into insight, seeing becomes understanding, and understanding leads to action

Five hundred million years ago, the emergence of vision overturned the dark world and triggered the most profound evolutionary process: the development of intelligence in the animal world.

The progress of AI in the past decade has been equally amazing. But I believe that the full potential of this Digital Cambrian Explosion will not be fully realized until we empower computers and robots with spatial intelligence, just as nature has done for all of us.

It’s an exciting time to teach our digital companions how to reason and interact with this beautiful 3D space we call home, while also creating more new worlds we can explore.

Achieving this future will not be easy. It requires all of us to think deeply and to develop technology that always puts people at the center.

But if we do it right, computers and robots powered by spatial intelligence will become not just useful tools but trusted partners, making us more productive and more empowered, while respecting the dignity of every individual and enhancing our collective prosperity.


The future I am most excited about is one in which AI becomes more perceptive, insightful, and spatially aware, and joins us in our pursuit of creating a better world.

(Full text ends)

Video playback: https://www.ted.com/talks/fei_fei_li_with_spatial_intelligence_ai_will_understand_the_real_world/transcript

