
The competition is insane! Google rolls out video-to-audio, and realistic sound effects bid farewell to silent AI video!

Jun 19, 2024, 09:36 AM

With the AI scene blooming everywhere at once, the melon-eating onlookers can barely keep up with the surprises.

The past few days on the other side of the ocean have been wild!

The excitement around Luma had barely faded when, last night, Runway dropped a bombshell: Gen-3 Alpha. (For details, see: Runway's version of Sora released: high fidelity, super consistency, Gen-3 Alpha shocks netizens)

Even more unexpectedly, I woke up to news from Google DeepMind as well: it had quietly announced progress on its video-to-audio (V2A) technology.
Although the feature is not yet open to the public, the official video demos look remarkably polished. Google DeepMind also emphasized that every example was created jointly by V2A and Veo, its most advanced generative video model.

Audio prompt: An exciting horror movie soundtrack, footsteps echoing on concrete. (Cinematic, thriller, horror film, music, tension, ambience, footsteps on concrete)
In a dark, abandoned warehouse, a man in black glides slowly like a ghost, while eerie music and echoing footsteps fill the scene with dread.

Audio prompt: The wolf howls in the moonlight. (Wolf howling at the moon)
As soon as the video demos went up, the comment section was unanimously asking: when will this be available?
Some netizens are hoping the open-source community will play the "cyber bodhisattva" and replicate Google's technology.
In fact, not long after Google DeepMind's announcement, ElevenLabs, the "leader" in the AI audio field, stepped in and released a project for automatically dubbing uploaded videos, generating suitable sound effects for them.
Link:
https://elevenlabs.io/docs/api-reference/how-to-use-text-to-sound-effects
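For readers who want to try it, here is a minimal Python sketch of calling that text-to-sound-effects endpoint. The endpoint path, header, and field names (`sound-generation`, `xi-api-key`, `duration_seconds`, `prompt_influence`) reflect my reading of the linked ElevenLabs docs and should be verified against the current API reference before use.

```python
# Minimal sketch: generate a sound effect from a text prompt with the
# ElevenLabs text-to-sound-effects API. Endpoint and field names are
# assumptions based on the docs linked above; verify before relying on them.
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"  # placeholder

resp = requests.post(
    "https://api.elevenlabs.io/v1/sound-generation",
    headers={"xi-api-key": API_KEY},
    json={
        "text": "Wolf howling at the moon, night ambience",  # describe the sound
        "duration_seconds": 5,      # optional: desired clip length
        "prompt_influence": 0.6,    # optional: how literally to follow the prompt
    },
)
resp.raise_for_status()

# The response body is the generated audio clip.
with open("sfx.mp3", "wb") as f:
    f.write(resp.content)
```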

Competition in the AI world has turned fierce. With vendors large and small chasing one another, a more level playing field is taking shape, and once these technologies mature, the possibilities for AI video will be endless.
AI Video Bids Farewell to the Silent Film Era

As everyone knows, video generation models are advancing at an astonishing pace. Yet whether it is Sora, which stunned the world at the start of the year, or the more recent Keling, Luma, and Gen-3 Alpha, all of them, without exception, produce "silent movies."

Google DeepMind's video-to-audio (V2A) technology makes synchronized audio-visual generation possible: it combines video pixels with natural-language text prompts to generate rich soundscapes for the on-screen action.

In terms of applications, V2A can be paired with video generation models such as Veo to create shots with dramatic scores, realistic sound effects, or dialogue that matches a video's characters and tone.

It can also generate soundtracks for traditional footage, such as archival material and silent films, broadening the creative possibilities.

Audio prompt: Cute baby dinosaurs chirp in the jungle, accompanied by the sound of cracking eggshells. (Cute baby dinosaur chirps, jungle ambience, egg cracking)

Audio prompt: Cars skidding and engines roaring, accompanied by angelic electronic music. (cars skidding, car engine throttling, angelic electronic music)

Audio prompt: At sunset, a mellow harmonica plays over the prairie. (a slow mellow harmonica plays as the sun goes down on the prairie)
V2A can generate an unlimited number of soundtracks for any video input. Users can define a "positive prompt" to steer generation toward desired sounds, or a "negative prompt" to steer it away from unwanted ones.

This flexibility gives users more control over the output, letting them quickly experiment with different soundtracks and choose the best match.

Audio prompt: A spaceship hurtles through the vastness of space, stars streaking past it at high speed; full of sci-fi feeling. (A spaceship hurtles through the vastness of space, stars streaking past it, high speed, Sci-fi)

Audio prompt: Ethereal cello atmosphere.

Audio prompt: A spaceship shuttles through the vast space at high speed, stars sweeping quickly past; a sci-fi feel. (A spaceship hurtles through the vastness of space, stars streaking past it, high speed, Sci-fi)
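Google has not said how the positive/negative prompt mechanism is implemented, but in diffusion models this kind of steering is commonly done with classifier-free guidance. The toy sketch below illustrates that general technique only; `fake_denoise` and the prompt embeddings are hypothetical stand-ins, not Google's API.

```python
# Hedged illustration of positive/negative prompt steering via
# classifier-free guidance; all components are stand-ins, since V2A
# itself is not publicly available.
import numpy as np

def guided_estimate(denoise, x_t, t, pos_emb, neg_emb, scale=7.0):
    """Push the model's prediction toward pos_emb and away from neg_emb."""
    eps_pos = denoise(x_t, t, pos_emb)  # prediction under the positive prompt
    eps_neg = denoise(x_t, t, neg_emb)  # prediction under the negative prompt
    # Extrapolate from the negative prediction toward the positive one.
    return eps_neg + scale * (eps_pos - eps_neg)

# Toy usage with a fake denoiser so the sketch runs end to end.
def fake_denoise(x, t, emb):
    return 0.9 * x + emb.mean()

x_t = np.random.default_rng(0).standard_normal(8)  # noisy audio latent
pos = np.ones(4)    # stand-in embedding of the desired-sound prompt
neg = np.zeros(4)   # stand-in embedding of the unwanted-sound prompt
print(guided_estimate(fake_denoise, x_t, 0, pos, neg))
```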
How it works

The research team experimented with both autoregressive and diffusion approaches in search of the most scalable AI architecture, and found that diffusion gave the most realistic and compelling results for generating audio synchronized with video.

The V2A system first encodes the video input into a compressed representation; a diffusion model then iteratively refines the audio from random noise. Guided by the visual input and the natural-language prompt, this process produces synchronized, realistic audio tightly aligned with the prompt. Finally, the audio output is decoded into a waveform and combined with the video data.
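As a reading aid, here is a toy Python sketch of that pipeline shape: encode the video, iteratively denoise an audio representation under video and text guidance, then decode a waveform. Every function is a hypothetical stand-in; Google has not released the model or any API.

```python
# Toy sketch of the described V2A pipeline shape. Nothing here is
# Google's code; all components are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def encode_video(frames):
    """Stand-in: compress video frames into a conditioning signal."""
    return frames.mean(axis=(1, 2, 3))  # one toy value per frame

def denoise_step(audio, t, video_cond, text_cond):
    """Stand-in for one diffusion refinement step guided by video + text."""
    guidance = 0.01 * (video_cond.mean() + text_cond.mean())
    return 0.95 * audio + guidance      # toy pull toward "clean" audio

def decode_waveform(audio_latent):
    """Stand-in: map the refined audio representation to a waveform."""
    return np.tanh(audio_latent)

frames = rng.random((24, 64, 64, 3))    # fake video: 24 RGB frames
text_cond = rng.random(16)              # fake embedding of the text prompt
video_cond = encode_video(frames)       # compressed video representation

audio = rng.standard_normal(16_000)     # start from pure random noise
for t in reversed(range(50)):           # iterative refinement
    audio = denoise_step(audio, t, video_cond, text_cond)

waveform = decode_waveform(audio)       # final audio to pair with the video
print(waveform.shape)
```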
To generate higher-quality audio and to steer the model toward specific sounds, the team added more information to the training process, including AI-generated annotations that describe sounds in detail, as well as dialogue transcripts.

By training on video, audio, and these additional annotations, the technology learns to associate specific audio events with various visual scenes, while also responding to the information contained in the annotations or transcripts.

Google emphasizes that its technology differs from existing video-to-audio solutions in that it understands raw pixels, and text prompts are optional. Moreover, the system requires no manual alignment of the generated sound with the video, greatly simplifying the creative process.
However, Google's technology is not perfect, and the team is still working through some limitations. For example, the quality of the audio output depends directly on the quality of the video input: artifacts or distortions in the video degrade the generated audio.

At the same time, they are also optimizing the lip-sync feature.

V2A attempts to generate speech from an input transcript and synchronize it with the characters' lip movements, but if the video generation model is not conditioned on that transcript, the mouth shapes and speech can fall out of sync. The team is refining the technology to make lip sync more natural.
Audio prompt: Music, with the transcript "This turkey looks amazing, I'm so hungry." (Music, Transcript: "this turkey looks amazing, I'm so hungry")

Perhaps because of the many social problems deepfake technology has caused, Google DeepMind is eager to cover itself, repeatedly promising to develop and deploy the technology responsibly: V2A will undergo rigorous safety assessment and testing before being made available to the public.

Additionally, they have integrated the SynthID toolkit into the V2A research, watermarking all AI-generated content to guard against misuse of the technology.

Reference link:

https://deepmind.google/discover/blog/generating-audio-for-video/

https://x.com/GoogleDeepMind/status/1802733643992850760
