
Pika's big move: starting today, video and sound effects can be produced 'in one pot'!


Just now, Pika released a new feature:

Sorry, we've been on mute until now.

Starting today, everyone can seamlessly generate sound effects for their videos with the new Sound Effects feature!


There are two ways to generate:

  • Either give a prompt describing the sound you want;
  • Or let Pika generate it automatically from the video content.

And Pika said very confidently: "If you think the sound effect sounds great, that's because it is."

Cars, radios, eagles, swords, cheers... the range of sounds is practically endless, and the effects stay highly consistent with the video picture.

Not only has the promotional video been released, but Pika's official website has also posted multiple demos.

For example, given no prompt at all, the AI simply watched a video of bacon roasting and matched sound effects to it that feel completely natural.

Another prompt:

Super saturated color, fireworks over a field at sunset.

Pika can generate the video and add the sound at the same time, and from the result it is easy to see that the sound lands quite accurately at the moment the fireworks bloom.

This new feature dropped over the weekend. While netizens were shouting that Pika is "insanely competitive and awesome", some also thought:

It's collecting all the "infinity stones" for multi-modal AI creation.


So let's take a closer look at how Pika's Sound Effects works.

“make some noise” for videos

Generating sound effects for videos in Pika is ex! treme! ly! sim! ple!

For example, with just one prompt, video and sound effects can be "produced in one pot":

Medieval trumpet player.


Compared with the previous video-generation workflow, you now only need to turn on the "Sound effects" button below.

The second method is to dub the video separately after it has been generated.

For example, take the video below: click "Edit", then select "Sound Effects":


Then you can describe the sound you want, for example:

Race car revving its engine.

Then, in just a few seconds, Pika generates sound effects based on the description and the video, offering six candidate sounds to choose from!
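Pika exposes Sound Effects only through its web app and has not published an API, so there is nothing real to call programmatically. Purely to make the two workflows above concrete, here is a sketch of what a client could look like; the host, endpoints, and every field name below are hypothetical.

```python
# Hypothetical sketch only: Pika has no public API, so the host, endpoints,
# and fields below are invented to mirror the two UI workflows described above.
import requests

BASE = "https://api.pika.example/v1"  # placeholder host, not a real service

def generate_video_with_sfx(prompt: str) -> dict:
    """Workflow 1: one prompt produces video and sound effects together
    (the equivalent of turning on the 'Sound effects' button)."""
    resp = requests.post(f"{BASE}/generate",
                         json={"prompt": prompt, "sound_effects": True})
    resp.raise_for_status()
    return resp.json()

def add_sfx_to_video(video_id: str, sfx_prompt: str) -> list:
    """Workflow 2: dub an already-generated video (Edit > Sound Effects),
    getting back several candidate sounds to choose from."""
    resp = requests.post(f"{BASE}/videos/{video_id}/sound_effects",
                         json={"prompt": sfx_prompt, "num_candidates": 6})
    resp.raise_for_status()
    return resp.json()["candidates"]

# For example: add_sfx_to_video("some-video-id", "Race car revving its engine.")
```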

It is worth mentioning that the Sound Effects feature is currently in beta, open only to Super Collaborator and Pro users.

However, Pika also said: "We will launch this feature to all users soon!"


A group of netizens has already started testing this beta version, reporting:

The sound effects fit the video very well and add a lot of atmosphere.

How does it work?

As for the technology behind Sound Effects, Pika has not disclosed it this time. But after Sora went viral, the voice startup ElevenLabs shipped a similar video-dubbing feature.

At the time, NVIDIA senior scientist Jim Fan offered an in-depth analysis of it.

He believes that for an AI to learn an accurate video-to-audio mapping, it also has to model some "implicit" physics in its latent space.


He detailed the problems an end-to-end Transformer must solve in order to simulate sound waves:

  1. Identify the category, material, and spatial location of each object.
  2. Recognize higher-order interactions between objects: for example, is the stick hitting metal or a drumhead, and at what speed?
  3. Identify the environment: is it a restaurant, a space station, or Yellowstone Park?
  4. Retrieve typical sound patterns for the objects and environment from the model's internal memory.
  5. Use "soft", learned physical rules to combine and adjust the parameters of those sound patterns, even creating entirely new sounds on the fly, a bit like "procedural audio" in a game engine.
  6. If the scene is complex, superimpose multiple sound tracks according to the spatial positions of the objects.

None of this is an explicit module; it is all learned by gradient descent from a large number of (video, audio) pairs, which come naturally time-aligned in most Internet videos. The attention layers implement these algorithms in their weights in order to satisfy the diffusion objective.
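To make the setup Jim Fan describes concrete: it is essentially a diffusion model whose denoiser cross-attends from audio latents to video features, so the time alignment and "implicit physics" live in the attention weights. Below is a minimal, illustrative PyTorch sketch of one training step on a (video, audio) pair; the architecture and every dimension are stand-ins of our own, not Pika's or ElevenLabs' undisclosed models.

```python
# Minimal sketch: denoise audio latents while cross-attending to video tokens,
# trained with the standard epsilon-prediction diffusion objective.
import torch
import torch.nn as nn

class VideoToAudioDenoiser(nn.Module):
    def __init__(self, d=256, n_heads=4, n_layers=2):
        super().__init__()
        self.video_proj = nn.Linear(3 * 32 * 32, d)  # stand-in for a visual backbone
        self.audio_proj = nn.Linear(64, d)           # noisy audio latents -> tokens
        self.time_embed = nn.Sequential(nn.Linear(1, d), nn.SiLU(), nn.Linear(d, d))
        # Self-attention over audio tokens, cross-attention into video tokens.
        layer = nn.TransformerDecoderLayer(d_model=d, nhead=n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.out = nn.Linear(d, 64)                  # predict the added noise

    def forward(self, noisy_audio, video_frames, t):
        # noisy_audio: (B, T_audio, 64); video_frames: (B, T_video, 3*32*32); t: (B, 1)
        v = self.video_proj(video_frames)
        a = self.audio_proj(noisy_audio) + self.time_embed(t).unsqueeze(1)
        return self.out(self.decoder(tgt=a, memory=v))

model = VideoToAudioDenoiser()
video = torch.randn(2, 16, 3 * 32 * 32)    # 16 flattened low-res frames
audio = torch.randn(2, 100, 64)            # 100 steps of audio latents
t = torch.rand(2, 1)                       # diffusion timestep in [0, 1]
noise = torch.randn_like(audio)
alpha = (1 - t).sqrt().unsqueeze(-1)       # toy noise schedule
noisy = alpha * audio + (1 - alpha ** 2).sqrt() * noise
loss = nn.functional.mse_loss(model(noisy, video, t), noise)
loss.backward()  # gradient descent does the rest, as Jim Fan notes
print(f"diffusion loss: {loss.item():.3f}")
```

Nothing here encodes materials, speeds, or rooms explicitly; if the six capabilities above emerge at all, they emerge inside the attention weights from scale and data.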

In addition, Jim Fan said at the time that NVIDIA's related work did not yet include such a high-quality AI audio engine, and he recommended a paper MIT published five years earlier: The Sound of Pixels.


Interested friends can click on the link at the end of the article to learn more.

One More Thing

On the topic of multimodality, LeCun's remarks in a recent interview have also been making the rounds. He believes:

Language (text) is low-bandwidth: less than 12 bytes per second. A modern LLM is typically trained on 1x10^13 two-byte tokens, i.e. 2x10^13 bytes. It would take a human approximately 100,000 years, reading 12 hours a day, to get through it.

Visual bandwidth is much higher: about 20 MB/s. Each of the two optic nerves has 1 million nerve fibers, each carrying about 10 bytes per second. A 4-year-old child has been awake for about 16,000 hours, which works out to roughly 1x10^15 bytes.

The data bandwidth of visual perception is thus approximately 16 million times that of written language.

In just four years, a child has seen 50 times more data than the biggest LLM trained on all of the publicly available text on the Internet.
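The quoted figures are easy to sanity-check; the few lines of arithmetic below reproduce the "about 100,000 years" and "50 times" claims from the stated bandwidths.

```python
# Reproducing LeCun's arithmetic from the numbers quoted above.
text_rate = 12                           # bytes/second a human reads
llm_bytes = 1e13 * 2                     # 1e13 two-byte tokens = 2e13 bytes
reading_secs_per_year = 365 * 12 * 3600  # reading 12 hours a day
years = llm_bytes / text_rate / reading_secs_per_year
print(f"Years to read an LLM's training text: {years:,.0f}")      # ~106,000

visual_rate = 2 * 1_000_000 * 10         # two optic nerves x 1M fibers x 10 B/s = 20 MB/s
child_bytes = visual_rate * 16_000 * 3600  # 16,000 waking hours
print(f"Bytes seen by a 4-year-old: {child_bytes:.1e}")           # ~1.2e15
print(f"Child / LLM data ratio: {child_bytes / llm_bytes:.0f}x")  # ~58x
```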


Thus, LeCun concluded:

If machines are not allowed to learn from high-bandwidth sensory input (such as vision), there is absolutely no way we can reach human-level artificial intelligence.

So, do you agree with this view?

