


Wall-E the robot is here! Disney unveils a new robot that uses RL to learn to walk and can also interact socially
Ta-da! A "WALL-E-style robot" has appeared!
It has a flat head and a boxy body, and if you point at the ground, it tilts its head in confusion.
However, it is not actually WALL-E; the real WALL-E looks like this!
This cute little robot, developed by the Disney Research team, was on display at the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in Detroit.
Disney's little robot immediately drew curious looks as soon as it appeared.
With its extremely expressive head, two swaying antennae, and short legs, its child-sized body conveys a great deal of emotional body language, endearing it to everyone who sees it.
What sets this robot apart from other small bipedal robots is its distinctive way of walking: it makes sounds as it moves, giving it the feel of a unique little creature.
Sure enough, after video from IROS was uploaded to the Internet, netizens found it adorable.
"It's so cute. Love, love, love."
"It's very cute."
Many netizens wanted to take it straight home:
"I need one!"
"I can't wait to see it walking around my room!"
Let's take a look at how this little robot was created.
The Journey to the Birth of Disney’s Little Robot
Programming robots to express emotional movements is Disney’s specialty.
In 1971, Disney World's Hall of Presidents began using Audio-Animatronics technology.
But as robots have become more advanced and mobile, it has become increasingly difficult for robot designers and animators to develop emotional behaviors that both exploit and remain compatible with real-world physical constraints.
Disney Research has spent the past year developing a new system that leverages reinforcement learning to transform animators' visions into expressive movements.
These movements are robust enough to work in almost any venue, whether at the International Conference on Intelligent Robots and Systems (IROS), at a Disney theme park, or in the woods of Switzerland.
The purpose of the new system is to let robots display more emotion and expression, making them more engaging and useful in different situations.
The robot was developed by a Disney research team in Zurich, led by Moritz Bächer. It is largely 3D-printed and built from modular hardware and actuators.
This design allowed it to be developed and improved very quickly, going from initial concept to the walking robot seen in the video in less than a year.
This robot has a four-degree-of-freedom head that can look up, down, left, and right, and tilt.
In addition, it has five-degree-of-freedom legs, each with a hip joint, allowing it to walk while maintaining dynamic balance.
This gives the robot the flexibility to perform more complex actions and interactions.
For Disney, that's not enough: its robots need to be able to walk, jump, trot, or strut gracefully in ways that convey the intended emotions.
Disney has professional animators who are good at expressing emotions through movement, as well as roboticists who are good at building mechanical systems.
“What we’re trying to bring to these robots actually stems from our history of character animation,” explains Michael Hopkins, Disney’s chief development engineer.
"We have animators on our team, and together we are able to leverage their knowledge and our technical expertise to create the best possible performance."
Creating an effective robot character requires combining the talents of animators and roboticists.
This is a fairly time-consuming process, involving a lot of trial and error to ensure the robot communicates the animator's intent without falling over.
"It's not just about walking. Walking is just one input to the reinforcement learning system; another important input is how it walks," Pope added.
To bridge this gap, Disney Research developed a system based on reinforcement learning.
The system uses simulation to reconcile the animator's vision with robust robot motion, balancing expressiveness against the constraints of the physical world so that animators can develop highly expressive movements.
These artists want their imagined movements to come to life as close to the robot's physical limits as possible.
Disney's pipeline can train the robot on a new behavior in just a few hours on a single PC, compressing what would otherwise amount to years of practice.
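The idea of rewarding both style and stability can be sketched as a simple imitation-style reward: one term for tracking the animator's reference pose, one term for staying upright. The function names, weights, and exact reward terms below are illustrative assumptions, not Disney's published formulation.

```python
import math

def imitation_reward(pose, target_pose, torso_tilt, w_style=1.0, w_balance=0.5):
    """Toy reward for style-aware locomotion RL (illustrative only).

    pose / target_pose: joint angles (radians) of the robot and the
    animator's reference motion at the current timestep.
    torso_tilt: torso deviation from upright, in radians.
    """
    # Tracking term: falls off exponentially with squared joint-angle error,
    # so closely matching the animator's motion is strongly rewarded.
    err = sum((p - t) ** 2 for p, t in zip(pose, target_pose))
    tracking = math.exp(-err)
    # Balance term: penalizes tilting away from upright.
    balance = math.exp(-torso_tilt ** 2)
    return w_style * tracking + w_balance * balance
```

With perfect tracking and an upright torso the reward is simply `w_style + w_balance`; deviating from the reference motion or tilting over lowers it, which is the trade-off the text describes.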
In addition, reinforcement learning makes the small robot's movements extremely stable.
The system Disney developed can train movements repeatedly while varying aspects such as motor performance, mass distribution, and the friction between the robot and the ground.
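Varying physical parameters during training like this is commonly known as domain randomization: the policy cannot overfit to one exact robot or surface, so it stays robust in the real world. A minimal sketch, with parameter names and ranges chosen purely for illustration:

```python
import random

def randomized_sim_params(base, spread=0.15, rng=None):
    """Sample a perturbed copy of the simulator's physical parameters.

    Each value is scaled by a random factor in [1 - spread, 1 + spread],
    drawn fresh for every training episode.
    """
    rng = rng or random.Random()
    return {k: v * rng.uniform(1 - spread, 1 + spread) for k, v in base.items()}

# Nominal values (made up for illustration).
base_params = {"motor_torque": 6.0, "body_mass": 15.0, "ground_friction": 0.8}

# At the start of each simulated episode, the policy trains against a
# slightly different robot and ground surface.
episode_params = randomized_sim_params(base_params, rng=random.Random(0))
```

A policy that succeeds across many such perturbed simulations is far more likely to cope with the inevitable mismatch between simulation and the physical robot.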
This ensures that no matter what the robot encounters in the real world, it knows how to respond and can still express the appropriate emotion; for a robot character, maintaining its own personality is essential.
Next destination
Disney's researchers are proud to point out that although the little robot is very cute, what matters more is the system behind it.
That system is what makes the robot so lively and endearing, and it is a promising first step in Disney's future journey.
Disney’s next plan is to use this technology to develop more physical robot characters and push the envelope with faster, more dynamic movements.
While the little robot doesn't have an official name yet, Disney says it's just a prelude to more animated robots to come.
It’s really exciting.