Netizen: I can’t even imagine how advanced video technology will be in a year’s time.
The fifty-second preview video has once again sparked heated discussion in the AI community.
Yesterday, Runway announced that it will soon add a "Motion Brush" feature to its video generation tool Gen-2, a brand-new way to control the motion of generated content.
This time, you don't even need to enter text; your hands are all you need.
Pick any one of the provided images, brush over a region, and that region immediately starts to move.
Whether it is flowing water, clouds, flames, smoke, or a character, the motion is reproduced with striking fidelity. Is this the legendary "Midas touch" that turns stone into gold?
After seeing this, netizens said: "I can't even imagine how advanced video technology will be in a year's time…"
At the beginning of 2023, generating video from text was still a very difficult task.
Runway launched Gen-1 in February this year with a rich feature set, including stylization, storyboarding, masking, rendering, customization, and more. At its core, though, it was a tool focused on "editing" existing video.
But the arrival of Gen-2 in March changed everything: it added the ability to generate video from text and images. Users only need to provide text, an image, or text plus an image, and Gen-2 generates the corresponding video in a fraction of the time.
It was the first publicly available text-to-video model on the market. For example, enter the plain-text prompt "The afternoon sun shines through the windows of a New York loft," and Gen-2 directly generates the corresponding video.
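For readers curious what this kind of prompt-driven workflow might look like programmatically, here is a minimal Python sketch of a text-to-video request. The endpoint URL, parameter names, and response fields are hypothetical illustrations of the general pattern, not Runway's actual public API.

```python
# Minimal sketch of a text-to-video request, assuming a hypothetical HTTP API.
# The endpoint, payload fields, and response shape below are illustrative only.
import requests

API_URL = "https://api.example.com/v1/text-to-video"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                               # placeholder credential

payload = {
    "prompt": "The afternoon sun shines through the windows of a New York loft",
    "duration_seconds": 4,      # assumed short-clip length
    "resolution": "1280x768",   # assumed output size
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()

# Assume the service responds with a URL to the rendered clip.
video_url = response.json().get("video_url")
print("Generated clip:", video_url)
```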
With just a few prompts and gestures, we can now generate high-quality videos and edit them further, without complex video editing software or lengthy production pipelines.
Combine a text-to-image tool like Midjourney with a text-to-video tool like Gen-2, and users can create blockbuster-level work without ever touching a pen. Of course, Gen-2 also has competitors, one of which is Pika Labs, and the latter is free.
The image above was generated by Pika Labs. Watching this frenzy of competition, some users are already looking ahead: "In 2024, the tug-of-war between Pika Labs and Runway will be interesting."
Rumor has it that OpenAI also has video generation technology in the works. As one netizen put it: "This makes me wonder how good OpenAI's any-to-any model is at generating video, because this company is usually ahead of everyone else."
Will the video and film production industry be upended by this?