


Making Sora's Tokyo woman sing and giving Gao Qiqiang Luo Xiang's voice: Alibaba's character lip-sync video generation is remarkably good
With Alibaba's EMO, making AI-generated or real portrait images move, speak, or sing has become much easier.
Recently, text-to-video models, represented by OpenAI's Sora, have surged in popularity again.
Beyond text-driven video generation, human-centered video synthesis has also drawn sustained attention, for example "talking head" video generation, where the goal is to produce facial expressions driven by a user-provided audio clip.
At the technical level, generating such expressions requires accurately capturing the speaker's subtle and diverse facial movements, which is a major challenge for this kind of video synthesis task.
Traditional methods usually impose constraints to simplify the video generation task. For example, some methods use 3D models to constrain facial keypoints, while others extract head motion sequences from raw videos to guide the overall motion. While these constraints reduce the complexity of video generation, they also limit the richness and naturalness of the resulting facial expressions.
In a recent paper from Alibaba's Institute for Intelligent Computing, researchers focused on the subtle connection between audio cues and facial movements in order to improve the authenticity, naturalness, and expressiveness of talking-head videos.
The researchers found that traditional methods often fail to adequately capture the full range of facial expressions and the unique styles of different speakers. They therefore proposed the EMO (Emote Portrait Alive) framework, which renders facial expressions directly through an audio-to-video synthesis approach, without intermediate 3D models or facial landmarks.
Paper title: EMO: Emote Portrait Alive - Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions
Paper address: https://arxiv.org/pdf/2402.17485.pdf
Project homepage: https://humanaigc.github.io/emote-portrait-alive/
In terms of results, Alibaba's method ensures seamless frame transitions throughout the video and maintains identity consistency, producing expressive and highly realistic character avatar videos that significantly outperform current SOTA methods in expressiveness and realism.
For example, EMO can make the Tokyo woman generated by Sora sing "Don't Start Now" by the British-Albanian singer Dua Lipa. EMO supports songs in different languages, including English and Chinese; it can pick up on tonal changes in the audio and generate dynamic, expressive AI character avatars, for example having a young woman generated by the AI painting model ChilloutMix sing Tao Zhe's "Melody".
EMO can also keep an avatar in step with fast-paced rap, for example having DiCaprio perform a section of "Godzilla" by American rapper Eminem. And EMO is not limited to singing: it also supports spoken audio in various languages, turning portraits and paintings of different styles, as well as 3D models and AI-generated content, into lifelike animated videos, such as a spoken segment by Audrey Hepburn.
Finally, EMO can also link different characters, for example having Gao Qiqiang from "The Knockout" speak with the voice of law professor Luo Xiang.
Method Overview
Given a single reference image of a character's portrait, the method can generate a video synchronized with an input speech audio clip, retaining very natural head movements and vivid expressions while coordinating with the tonal changes of the provided voice audio. By creating a seamless series of cascaded video clips, the model can generate long talking-portrait videos with consistent identity and coherent motion, which are critical for real-world applications.
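To make the cascaded-clip idea concrete, here is a minimal sketch (not the authors' code) of how long-video generation could be stitched together from short clips; `generate_clip` is a hypothetical callable standing in for one diffusion sampling pass, and the zero-frame initialization is an illustrative assumption.

```python
import numpy as np

def generate_long_video(reference_image, audio_chunks, generate_clip,
                        clip_len=12, n_motion_frames=4):
    """Cascade clips: each new clip sees the reference image, the next audio
    chunk, and the last frames of the previous clip ("motion frames"), so
    identity and motion stay coherent across clip boundaries."""
    video = []
    # For the very first clip there is no history; zero frames are one placeholder choice.
    motion_frames = [np.zeros_like(reference_image)] * n_motion_frames
    for audio in audio_chunks:
        clip = generate_clip(reference_image, audio, motion_frames, clip_len)
        video.extend(clip)
        motion_frames = clip[-n_motion_frames:]  # seed the next clip
    return video
```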
Network Pipeline
The overview of the method is shown in the figure below. The backbone network receives multi-frame noisy latent inputs and attempts to denoise them into consecutive video frames at each time step; its UNet structure follows a configuration similar to the original SD 1.5.
Similar to previous work, in order to ensure continuity between generated frames, the backbone network embeds a temporal module.
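As an illustration of what "denoising multi-frame latents at each time step" looks like, below is a simplified sampling loop written against a diffusers-style scheduler; the `backbone` call signature, conditioning interface, and latent shapes are assumptions for the sketch, not the paper's actual API.

```python
import torch
from diffusers import DDIMScheduler

def denoise_clip(backbone, ref_feats, audio_feats, num_frames=12,
                 latent_shape=(4, 64, 64), steps=30):
    """Jointly denoise a stack of per-frame latents into one video clip."""
    scheduler = DDIMScheduler()
    scheduler.set_timesteps(steps)
    latents = torch.randn(1, num_frames, *latent_shape)
    for t in scheduler.timesteps:
        # The temporal module inside the backbone lets frames attend to each other.
        noise_pred = backbone(latents, t, ref_feats, audio_feats)
        latents = scheduler.step(noise_pred, t, latents).prev_sample
    return latents  # each frame is then decoded by the VAE into an image
```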
In order to maintain identity consistency of the portrait across generated frames, the researchers deployed a UNet structure parallel to the backbone network, called ReferenceNet, which takes the reference image as input and extracts reference features.
In order to drive the movement of the character when speaking, the researchers used an audio layer to encode the sound characteristics.
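Audio layers of this kind typically start from features produced by a pretrained speech encoder. The sketch below uses wav2vec 2.0 from Hugging Face as an illustrative choice; the specific checkpoint and the absence of any pooling are assumptions, not details confirmed by the article.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Illustrative checkpoint; the model actually used by EMO may differ.
CKPT = "facebook/wav2vec2-base-960h"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(CKPT)
speech_encoder = Wav2Vec2Model.from_pretrained(CKPT)

def encode_audio(waveform_16khz):
    """waveform_16khz: 1-D float tensor of raw audio sampled at 16 kHz.
    Returns per-timestep speech features that the audio layers can attend to."""
    inputs = feature_extractor(waveform_16khz.numpy(), sampling_rate=16000,
                               return_tensors="pt")
    with torch.no_grad():
        hidden = speech_encoder(**inputs).last_hidden_state  # (1, T, 768)
    return hidden
```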
In order to make the speaking character's movements controllable and stable, the researchers used a face locator and speed layers to provide weak conditions.
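How "weak" conditions differ from precise control signals can be illustrated with two toy encoders: a coarse face-region mask instead of exact facial landmarks, and a bucketed head-rotation speed instead of an exact motion trajectory. Everything below (shapes, bucket edges, function names) is an illustrative assumption, not the paper's implementation.

```python
import torch

def face_region_mask(height, width, bbox):
    """Encode only the permissible face region as a binary mask built from a
    bounding box (x0, y0, x1, y1), rather than exact facial keypoints."""
    mask = torch.zeros(1, height, width)
    x0, y0, x1, y1 = bbox
    mask[:, y0:y1, x0:x1] = 1.0
    return mask

def speed_bucket(head_rotation_speed,
                 edges=(-1.0, -0.3, -0.1, 0.1, 0.3, 1.0)):
    """Map a scalar head-rotation speed onto coarse buckets so the model only
    receives an approximate target speed for head motion."""
    return int(torch.bucketize(torch.tensor(head_rotation_speed),
                               torch.tensor(edges)))
```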
For the backbone network, the researchers did not use prompt embeddings; instead, they replaced the cross-attention layers in the SD 1.5 UNet structure with reference-attention layers. These modified layers take the reference features obtained from ReferenceNet as input instead of text embeddings.
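A minimal sketch of what swapping text cross-attention for reference attention amounts to: the keys and values come from ReferenceNet features rather than text embeddings. The dimensions and residual wiring are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class ReferenceAttention(nn.Module):
    """Cross-attention whose keys/values are ReferenceNet features."""
    def __init__(self, dim=320, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, backbone_feats, reference_feats):
        # backbone_feats:  (B, H*W, C) features of the frame being denoised
        # reference_feats: (B, H*W, C) features from the reference portrait
        out, _ = self.attn(query=backbone_feats,
                           key=reference_feats,
                           value=reference_feats)
        return backbone_feats + out  # residual, as in SD attention blocks
```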
Training strategy
The training process is divided into three stages:
The first stage is image pretraining, in which the backbone network, ReferenceNet, and the face locator are incorporated into training: the backbone network takes a single frame as input, while ReferenceNet processes a different, randomly selected frame from the same video clip. Both the backbone and ReferenceNet initialize their weights from the original SD.
In the second stage, the researchers introduce video training, adding the temporal module and the audio layers, and sampling n+f consecutive frames from a video clip, of which the first n frames serve as motion frames. The temporal module initializes its weights from AnimateDiff.
The last stage integrates the speed layer; only the temporal module and the speed layer are trained at this stage, and the audio layers are intentionally excluded. This is because the speaker's expressions, mouth movements, and the frequency of head movements are primarily driven by the audio; as a result, these signals are correlated, and the model might learn to drive the character's motion from the speed signal rather than from the audio. Experimental results show that training the speed layer and the audio layers simultaneously weakens the audio's ability to drive the character's motion.
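The three-stage schedule can be read as a freezing plan over the modules described above. The sketch below is a simplified rendering of that plan; the module handles are placeholders, and which parameters stay frozen in stage two is an assumption, since the article does not spell it out.

```python
def set_trainable(modules, flag):
    for m in modules:
        for p in m.parameters():
            p.requires_grad = flag

def configure_stage(stage, backbone, reference_net, face_locator,
                    temporal_module, audio_layers, speed_layers):
    """Freeze everything, then unfreeze what each training stage updates."""
    everything = [backbone, reference_net, face_locator,
                  temporal_module, audio_layers, speed_layers]
    set_trainable(everything, False)
    if stage == 1:    # image pretraining: single frames, no temporal/audio/speed parts
        set_trainable([backbone, reference_net, face_locator], True)
    elif stage == 2:  # video training: temporal module and audio layers join in
        set_trainable([backbone, reference_net, face_locator,
                       temporal_module, audio_layers], True)
    elif stage == 3:  # only the temporal module and speed layers are updated
        set_trainable([temporal_module, speed_layers], True)
```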
Experimental results
The methods involved in the comparison during the experiment include Wav2Lip, SadTalker, and DreamTalk.
Figure 3 shows the comparison results of this method with previous methods. It can be observed that when provided with a single reference image as input, Wav2Lip typically synthesizes a blurred mouth region and generates videos characterized by static head poses and minimal eye movements. In the case of DreamTalk, the results can distort the original face and also limit the range of facial expressions and head movements. Compared with SadTalker and DreamTalk, the method proposed in this study is able to generate a larger range of head movements and more vivid facial expressions.
The study further explores avatar video generation in various portrait styles, such as realistic, anime, and 3D. The characters were animated using the same vocal audio input, and the results showed that the resulting videos produced roughly consistent lip sync across the different styles.
Figure 5 shows that the method can generate richer facial expressions and movements when processing audio with pronounced tonal characteristics. For example, in the third row of the figure below, a high pitch triggers stronger, more vivid expressions from the character. In addition, motion frames make it possible to extend the generated video, i.e., to produce a longer video matching the length of the input audio. As shown in Figures 5 and 6, the method preserves the character's identity in extended sequences even during large movements.
The results in Table 1 show that this method has significant advantages in video quality assessment.
