OpenAI’s Sora made a stunning debut in February this year, marking a new breakthrough in text-to-video generation. It can create strikingly realistic and imaginative videos from text input that look as if they came from Hollywood. Many have marveled at the innovation, and some believe OpenAI's work represents the pinnacle of the field.
The excitement around Sora continues unabated. At the same time, researchers have begun to recognize the enormous potential of AI video generation, and the field is attracting more and more attention.
However, most current algorithm research in AI video generation focuses on generating videos from text prompts alone. Multi-modal input, especially the widely applicable combination of images and text, has received little in-depth attention. This bias limits the diversity and controllability of generated videos and constrains the ability to turn static images into dynamic videos.
On the other hand, most existing video generation models offer no support for editing the generated content and cannot meet users' needs for personalized adjustments to their videos.
Prompt: Transform the panda into a bear and make it dance.
In this paper, researchers from SEEKING AI, Harvard University, Stanford University, and Peking University jointly propose WorldGPT, a unified framework for image-text-based video generation and editing. The framework is built on VisionGPT, which was jointly developed by SEEKING AI and the top universities above. It can not only generate videos directly from an image and accompanying text, but also supports a series of video appearance editing operations, such as style transfer and background replacement, through simple text prompts.
Another significant advantage of the framework is that it requires no training, which greatly lowers the technical barrier and makes it easy to deploy and use. Users can create with the model directly, without having to deal with a tedious training process.
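The paper does not expose a public API, but the training-free, image-plus-text workflow it describes can be pictured roughly as follows. This is a minimal sketch; the class, method names, and parameters are hypothetical stand-ins for illustration, not the authors' actual interface.

```python
# Hypothetical sketch of a training-free image-text video pipeline.
# The class, method names, and parameters below are illustrative
# assumptions, not WorldGPT's published interface.
from dataclasses import dataclass, field
from typing import List

@dataclass
class VideoClip:
    frames: List[object] = field(default_factory=list)  # e.g. PIL images
    fps: int = 8

class ImageTextVideoPipeline:
    """Composes pretrained models; note there is no training step anywhere."""

    def generate(self, image_path: str, prompt: str, num_frames: int = 16) -> VideoClip:
        # A real system would (a) encode the input image so its structure
        # and environment are preserved, and (b) condition a pretrained
        # video generator on the image + text prompt. Placeholder frames here.
        return VideoClip(frames=[f"<frame {i} of '{prompt}'>" for i in range(num_frames)])

    def edit(self, clip: VideoClip, prompt: str) -> VideoClip:
        # Appearance edits (style transfer, background replacement) driven
        # purely by a text prompt, with no retraining of any model.
        return VideoClip(frames=[f"{f} | edited: {prompt}" for f in clip.frames], fps=clip.fps)

pipeline = ImageTextVideoPipeline()
clip = pipeline.generate("panda.png", "Transform the panda into a bear and make it dance.")
stylized = pipeline.edit(clip, "Replace the background with a snowy forest.")
print(len(stylized.frames), "frames at", stylized.fps, "fps")
```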
Next, let's look at examples of WorldGPT in various complex video generation control scenarios.
Prompt: "A fleet of ships struggled forward in a howling storm, their sails sailing against the huge waves of the relentless storm. .(A fleet of ships pressed on through the howling tempest, their sails billowing as they navigated the towering waves of the relentless storm.)》
Prompt: "A cute dragon is spitting fire on an urban street."
Prompt: "A cyberpunk-style robot is A cyberpunk-style automaton raced through the neon-lit, dystopian cityscape, reflections of towering holograms and digital decay projected onto its sleek metal body. and digital decay playing across its sleek, metallic body.)》
As the examples above show, WorldGPT offers the following advantages when handling complex video generation instructions:
1) it preserves the structure and environment of the original input image well;
2) it generates videos that conform to the image-text description, demonstrating powerful generation and customization capabilities;
3) the generated video can be customized through prompts.
To learn more about the principles, experiments, and use cases of WorldGPT, please view the original paper.
As mentioned earlier, the WorldGPT framework is built on the VisionGPT framework. Next, we briefly introduce VisionGPT.
VisionGPT was jointly developed by SeekingAI, Stanford University, Harvard University, Peking University, and other world-leading institutions. It is a groundbreaking open-world visual perception large-model framework that delivers powerful multi-modal image processing through intelligent integration and decision-based selection of state-of-the-art (SOTA) large models.
The innovation of VisionGPT is mainly reflected in three aspects. As can be seen from the above, VisionGPT can easily achieve 1) open-world instance segmentation without fine-tuning and 2) prompt-based image generation and editing, among other functions. VisionGPT's workflow is shown in the figure below.
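To make the "intelligent integration and selection" idea concrete, here is a minimal sketch of how a framework might route a multi-modal request to one of several pretrained SOTA models. The registry entries, capability names, and keyword-based selection rule are illustrative assumptions, not VisionGPT's actual decision logic.

```python
# Hypothetical sketch of routing a multi-modal request to a pretrained model.
# The registry and selection rule are illustrative assumptions, not
# VisionGPT's actual decision-making logic.
from typing import Callable, Dict

# Each capability maps to a callable backed by a pretrained (frozen) model.
MODEL_REGISTRY: Dict[str, Callable[[str, str], str]] = {
    "instance_segmentation": lambda image, prompt: f"masks for objects in {image}",
    "image_editing":         lambda image, prompt: f"{image} edited per '{prompt}'",
}

def select_capability(prompt: str) -> str:
    # A real system might use an LLM to classify the user's request;
    # simple keyword matching stands in for that here.
    if any(w in prompt.lower() for w in ("segment", "mask", "outline")):
        return "instance_segmentation"
    return "image_editing"

def run(image: str, prompt: str) -> str:
    capability = select_capability(prompt)
    return MODEL_REGISTRY[capability](image, prompt)

print(run("street.png", "Segment every car in the scene"))
print(run("street.png", "Make the scene look like dusk"))
```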
For more details, please refer to the paper.
In addition, the researchers have launched VisionGPT-3D, which aims to solve a major challenge in converting text to visual elements: how to efficiently and accurately convert 2D images into 3D representations. This process often suffers from a mismatch between the chosen algorithm and the actual requirements, which degrades the quality of the final result. VisionGPT-3D proposes a multimodal framework that optimizes the conversion by integrating multiple state-of-the-art (SOTA) vision models. Its core innovation lies in automatically selecting the most suitable vision model and 3D point cloud creation algorithm, and generating output that best matches user needs based on multi-modal inputs such as text prompts.
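The paper's pipeline details are beyond this article, but a typical 2D-to-3D step back-projects a per-pixel depth estimate through the camera intrinsics into a point cloud. The sketch below shows that standard computation with numpy; the depth map and intrinsics here are synthetic stand-ins, and a framework like VisionGPT-3D would instead select real depth and point cloud algorithms for the input at hand.

```python
# Standard 2D-to-3D back-projection: lift a depth map into a point cloud
# via pinhole camera intrinsics. The depth map and intrinsics below are
# synthetic placeholders, not outputs of VisionGPT-3D.
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Return an (H*W, 3) array of camera-space points (X, Y, Z)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel coordinates
    z = depth
    x = (u - cx) * z / fx   # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Synthetic 4x4 depth map (meters) standing in for a monocular depth estimate.
depth = np.full((4, 4), 2.0)
points = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(points.shape)  # (16, 3)
```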
For more information, please refer to the original paper.