Pika has just released a new feature:
Sorry, we've been on mute until now.
Starting today, everyone can seamlessly generate sound effects for their videos: Sound Effects!
There are two ways to generate them (both shown below), and Pika says, very confidently: "If you think the sound effects sound great, that's because they are."
Cars, radios, eagles, swords, cheers... the range of sounds seems endless, and they match the video footage closely.
Beyond the promotional video, Pika's official website has also posted multiple demos.
For example, with no prompt at all, the AI watched a video of bacon being grilled and matched sound effects to it without a hint of mismatch.
Another demo uses a prompt:
Super saturated color, fireworks over a field at sunset.
Pika generates the video and adds sound at the same time, and as the result shows, the audio lands quite precisely at the moment the fireworks burst.
The new feature landed over the weekend, and while netizens exclaimed that Pika is "ridiculously competitive and awesome," some also remarked:
It's collecting all the "Infinity Stones" of multimodal AI creation.
So let's move on to how Pika's Sound Effects actually works.
Generating sound effects for a video in Pika is ex! treme! ly! simple!
For example, with just one prompt, the video and its sound effects can be "cooked in one pot":
Medieval trumpet player.
Compared with the earlier video-generation flow, you now only need to turn on the "Sound effects" button below.
The second method is to add the sound separately, after the video has been generated.
For the video below, for example, click "Edit" and then select "Sound Effects":
Then you can describe the sound you want, for example:
Race car revving its engine.
Then, in just a few seconds, Pika generates sound effects matching both the description and the video, with six options to choose from!
It is worth mentioning that the Sound Effects feature is currently in testing and available only to Super Collaborator and Pro users.
However, Pika also said: "We will be rolling this feature out to all users soon!"
A wave of netizens has already begun testing this beta version, reporting:
The sound effects fit the video very well and add a lot of atmosphere.
As for the technology behind Sound Effects, Pika has not disclosed it this time. But after Sora took off, the voice startup ElevenLabs built a similar dubbing feature.
At the time, NVIDIA senior scientist Jim Fan gave a more in-depth analysis.
He argued that for an AI to learn an accurate video-to-audio mapping, it must also model some "implicit" physics in its latent space.
He detailed the problems an end-to-end Transformer has to solve when simulating sound waves:
None of this is an explicit module; it is all learned by gradient descent from a massive number of (video, audio) pairs, which come naturally time-aligned in most Internet videos. The attention layers implement these algorithms in their weights in order to satisfy the diffusion objective.
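To make that concrete, here is a minimal, hypothetical PyTorch sketch of the kind of architecture Jim Fan describes: audio latents cross-attend to per-frame video features inside a diffusion denoiser, so picture-to-sound alignment can be learned from paired data. Neither Pika nor ElevenLabs has published their design, so every name and dimension below is illustrative, not their actual method.

```python
# Toy sketch of a video-conditioned audio diffusion denoiser.
# Hypothetical architecture; real systems here are unpublished.
import torch
import torch.nn as nn

class VideoConditionedDenoiser(nn.Module):
    def __init__(self, audio_dim=256, video_dim=512, heads=8):
        super().__init__()
        # Map per-frame visual features into the audio latent space.
        self.video_proj = nn.Linear(video_dim, audio_dim)
        self.time_embed = nn.Sequential(
            nn.Linear(1, audio_dim), nn.SiLU(), nn.Linear(audio_dim, audio_dim))
        self.self_attn = nn.MultiheadAttention(audio_dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(audio_dim, heads, batch_first=True)
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(audio_dim) for _ in range(3))
        self.mlp = nn.Sequential(
            nn.Linear(audio_dim, 4 * audio_dim), nn.GELU(), nn.Linear(4 * audio_dim, audio_dim))

    def forward(self, noisy_audio, video_feats, t):
        # noisy_audio: (B, T_audio, audio_dim) latent audio tokens at noise step t
        # video_feats: (B, T_frames, video_dim) per-frame visual features
        h = noisy_audio + self.time_embed(t[:, None, None].float())
        h = h + self.self_attn(self.norm1(h), self.norm1(h), self.norm1(h))[0]
        v = self.video_proj(video_feats)
        # Cross-attention is where picture-to-sound time alignment can be learned.
        h = h + self.cross_attn(self.norm2(h), v, v)[0]
        h = h + self.mlp(self.norm3(h))
        return h  # predicted noise

# One training step on a (video, audio) pair with the eps-prediction objective.
model = VideoConditionedDenoiser()
audio = torch.randn(2, 100, 256)   # stand-in for encoded audio latents
video = torch.randn(2, 16, 512)    # stand-in for encoded video frames
t = torch.randint(0, 1000, (2,))
noise = torch.randn_like(audio)
noisy = audio + noise              # real schedules scale by alpha_bar; omitted for brevity
loss = nn.functional.mse_loss(model(noisy, video, t), noise)
loss.backward()
```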
In addition, Jim Fan noted at the time that NVIDIA's related work did not yet have such a high-quality AI audio engine, but he recommended a five-year-old MIT paper, The Sound of Pixels:
Interested friends can click on the link at the end of the article to learn more.
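For reference, the core idea of The Sound of Pixels is a self-supervised "Mix-and-Separate" objective: mix the audio tracks of two unrelated videos, then train a network to recover each track using that video's visual features as the cue, so the (video, audio) correspondence supervises itself. Below is a heavily condensed, illustrative version; the actual paper uses a ResNet on frames and a U-Net on spectrograms, and these toy modules are stand-ins.

```python
# Illustrative "Mix-and-Separate" objective from The Sound of Pixels
# (Zhao et al., 2018), heavily simplified.
import torch
import torch.nn as nn

class ToySeparator(nn.Module):
    def __init__(self, freq_bins=64, vis_dim=128):
        super().__init__()
        self.audio_net = nn.Sequential(
            nn.Linear(freq_bins, 256), nn.ReLU(), nn.Linear(256, vis_dim))
        self.mask_head = nn.Linear(vis_dim, freq_bins)

    def forward(self, mix_spec, vis_feat):
        # mix_spec: (B, T, F) magnitude spectrogram of the mixed audio
        # vis_feat: (B, vis_dim) visual feature of ONE source video
        a = self.audio_net(mix_spec)                  # (B, T, vis_dim)
        gated = a * vis_feat[:, None, :]              # visual feature gates audio features
        return torch.sigmoid(self.mask_head(gated))   # (B, T, F) soft mask for that source

B, T, F_, D = 4, 50, 64, 128
spec_a, spec_b = torch.rand(B, T, F_), torch.rand(B, T, F_)  # two unrelated videos' audio
vis_a = torch.randn(B, D)                                    # visual feature of video A
mix = spec_a + spec_b                                        # synthetic mixture = free supervision
target_mask = spec_a / mix.clamp(min=1e-6)                   # ratio mask that recovers source A

model = ToySeparator()
loss = nn.functional.binary_cross_entropy(model(mix, vis_a), target_mask)
loss.backward()
```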
On the subject of multimodality, LeCun's views in a recent interview have also been widely discussed. He believes:
Language (text) is low-bandwidth: less than 12 bytes/second. A modern LLM is typically trained on 1×10^13 two-byte tokens, i.e., 2×10^13 bytes. It would take a human roughly 100,000 years, reading 12 hours a day, to get through that.
Visual bandwidth is much higher: about 20 MB/s. Each of the two optic nerves carries 1 million nerve fibers, each transmitting about 10 bytes per second. A 4-year-old child has been awake for about 16,000 hours, which converts to roughly 1×10^15 bytes.
The data bandwidth of visual perception is therefore about 16 million times that of written language.
The data a 4-year-old has seen amounts to 50 times all the publicly available Internet text used to train the largest LLMs.
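LeCun's arithmetic is easy to sanity-check. The short script below reproduces the back-of-the-envelope numbers from the figures quoted above (reading rate, optic-nerve bandwidth, waking hours):

```python
# Back-of-the-envelope check of the figures LeCun quotes.
text_rate = 12                             # bytes/second when reading
llm_bytes = 2e13                           # 1e13 tokens x 2 bytes each
years_reading = llm_bytes / text_rate / (12 * 3600 * 365)  # reading 12 h/day
print(f"Years for a human to read the LLM corpus: {years_reading:,.0f}")  # ~105,700

visual_rate = 2 * 1_000_000 * 10           # 2 optic nerves x 1e6 fibers x ~10 B/s = 20 MB/s
child_bytes = visual_rate * 16_000 * 3600  # ~16,000 waking hours by age 4
print(f"Bytes seen by a 4-year-old: {child_bytes:.1e}")                    # ~1.2e15
print(f"Child's visual data vs. LLM text data: {child_bytes / llm_bytes:.0f}x")  # ~58x, in line with the ~50x claim
```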
Thus, LeCun concluded:
If machines are not allowed to learn from high-bandwidth sensory input (such as vision), there is absolutely no way we can achieve human-level artificial intelligence.
So, do you agree with this view?