


WorldGPT is here: a Sora-like video AI agent that brings images and text "to life"
OpenAI's Sora made a stunning debut in February this year, bringing a new breakthrough to text-to-video generation. It can create strikingly realistic and imaginative videos from text input that look as if they came from Hollywood. Many marveled at this innovation and felt that OpenAI's performance had reached a new peak.
The craze sparked by Sora continues unabated. At the same time, researchers have begun to recognize the enormous potential of AI video generation technology, and the field is attracting more and more attention.
However, most current research on AI video generation focuses on generating videos from text prompts alone; multi-modal input, especially the widely used combination of images and text, has received little in-depth attention. This bias reduces the diversity and controllability of generated videos and limits the ability to turn static images into dynamic videos.
On the other hand, most existing video generation models offer no support for editing the generated content, and so cannot meet users' needs for personalized adjustments to the videos they produce.
Prompt: "Transform the panda into a bear and make it dance."
In this article, researchers from SEEKING AI, Harvard University, Stanford University and Peking University jointly propose WorldGPT, a unified framework for image-and-text-based video generation and editing. The framework is built on VisionGPT, a framework jointly developed by SEEKING AI and the universities above. It can not only generate videos directly from images and text, but also supports a series of video appearance editing operations, such as style transfer and background replacement, through simple text prompts.
Another significant advantage of the framework is that it requires no training, which considerably lowers the technical barrier and makes it easy to deploy and use. Users can create with the model directly, without worrying about a tedious training process behind the scenes.
- Paper address: https://arxiv.org/pdf/2403.07944.pdf
- Paper title: WorldGPT: A Sora-Inspired Video AI Agent as Rich World Models from Text and Image Inputs
Next let’s look at examples of WorldGPT in various complex video generation control scenarios.
Video generation with background replacement
Prompt: "A fleet of ships pressed on through the howling tempest, their sails billowing as they navigated the towering waves of the relentless storm."
Stylized video generation with background replacement
Prompt: "A cute dragon is spitting fire on an urban street."
Video generation with object and background replacement
Prompt: "A cyberpunk-style automaton raced through the neon-lit, dystopian cityscape, reflections of towering holograms and digital decay playing across its sleek, metallic body."
As the examples above show, WorldGPT has the following advantages when handling complex video generation instructions:
1) It better preserves the structure and environment of the original input image;
2) It generates videos that conform to the image-text description, demonstrating powerful video generation and customization capabilities;
3) The generated video can be customized through prompts.
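The prompt-driven, training-free workflow described above can be sketched in code. The following is a minimal illustrative sketch only: the `VideoRequest` class, `plan_video` function, and edit-operation strings are assumptions for illustration, not the paper's actual API.

```python
# Hypothetical sketch of invoking a training-free, prompt-driven video agent.
# All names here are illustrative assumptions, not WorldGPT's real interface.
from dataclasses import dataclass, field

@dataclass
class VideoRequest:
    image_path: str                            # static input image to animate
    prompt: str                                # text description of the target video
    edits: list = field(default_factory=list)  # appearance edits, e.g. background swap

def plan_video(request: VideoRequest) -> dict:
    """Assemble a generation plan from image + text; no model training involved."""
    return {
        "source_image": request.image_path,
        "prompt": request.prompt,
        "edit_ops": request.edits or ["none"],
    }

req = VideoRequest(
    image_path="panda.png",
    prompt="Transform the panda into a bear and make it dance.",
    edits=["object_replacement:panda->bear", "motion:dance"],
)
plan = plan_video(req)
print(plan["edit_ops"])
```

The point of the sketch is the input shape: one static image plus one prompt, with optional edit operations layered on top, and no fine-tuning step anywhere in the flow.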
To learn more about the principles, experiments, and use cases of WorldGPT, please refer to the original paper.
VisionGPT
As mentioned earlier, the WorldGPT framework is built on VisionGPT. Next, we briefly introduce it.
VisionGPT was jointly developed by SeekingAI, Stanford University, Harvard University, Peking University and other leading institutions. It is a groundbreaking open-world visual perception framework that provides powerful multi-modal image-processing capabilities by intelligently integrating, and selecting among, state-of-the-art (SOTA) large models.
The innovation of VisionGPT is mainly reflected in three aspects:
- First, it uses a large language model (such as LLaMA-2) as its core to decompose the user's prompt into detailed step-by-step requirements and automatically call the most appropriate large model for each step;
- Secondly, VisionGPT automatically accepts and fuses the multi-modal outputs of multiple SOTA large models to produce image-processing results tailored to the user's needs;
- Finally, VisionGPT is extremely flexible and versatile, supporting a wide range of application scenarios, including text-driven image understanding, generation, and editing, without requiring users to fine-tune the model.
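The three stages above (decompose, dispatch, fuse) can be sketched as a simple orchestration loop. Everything in this sketch is an illustrative assumption: the model registry, the keyword-based `decompose` stand-in for the LLM, and the string outputs are placeholders for the real components.

```python
# Minimal sketch of the three-stage VisionGPT-style workflow described above.
# The registry, model names, and decomposition logic are illustrative assumptions.

# Hypothetical registry mapping task types to specialist SOTA models.
MODEL_REGISTRY = {
    "segment": "open-world-segmenter",
    "generate": "text-to-image-model",
    "edit": "image-editing-model",
}

def decompose(prompt: str) -> list:
    """Stand-in for the core LLM (e.g. LLaMA-2) that splits a request into steps."""
    steps = []
    if "segment" in prompt:
        steps.append("segment")
    if "replace" in prompt or "edit" in prompt:
        steps.append("edit")
    if not steps:
        steps.append("generate")
    return steps

def dispatch(step: str) -> str:
    """Route each step to the most appropriate specialist model."""
    model = MODEL_REGISTRY[step]
    return f"{step} result from {model}"

def fuse(outputs: list) -> str:
    """Fuse the multi-modal outputs into one result tailored to the request."""
    return " | ".join(outputs)

result = fuse([dispatch(s) for s in decompose("segment the dog and edit the sky")])
print(result)
```

In the real framework the LLM, not a keyword check, performs the decomposition, and the fused result is an image or video rather than a string; the sketch only shows the control flow.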
- Paper address: https://arxiv.org/pdf/2403.09027.pdf
- Paper title: VisionGPT: Vision-Language Understanding Agent Using Generalized Multimodal Framework
VisionGPT Use Case
As shown above, VisionGPT can easily achieve 1) open-world instance segmentation without fine-tuning, and 2) prompt-based image generation and editing, among other functions. The workflow of VisionGPT is shown in the figure below.
For more details, please refer to the paper.
VisionGPT-3D
In addition, the researchers have also launched VisionGPT-3D, which aims to solve a major challenge in converting text to visual elements: how to convert 2D images into 3D representations efficiently and accurately. This process often suffers from mismatches between the chosen algorithm and the actual requirements, degrading the quality of the final result. VisionGPT-3D proposes a multimodal framework that optimizes the conversion by integrating multiple state-of-the-art (SOTA) vision models. Its core innovation is the ability to automatically select the most suitable vision model and 3D point-cloud creation algorithm, and to generate output that best meets user needs based on multi-modal inputs such as text prompts.
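The selection step described above can be sketched as a lookup over candidate components. This is a hedged illustration only: the candidate model names, algorithm names, and the scene/density selection criteria are all assumptions, not the paper's actual components.

```python
# Illustrative sketch of the selection step VisionGPT-3D is described as performing:
# pick a suitable vision model and a 3D point-cloud creation algorithm per request.
# Candidate names and the selection rule are assumptions for illustration.

DEPTH_MODELS = {"indoor": "indoor-depth-net", "outdoor": "outdoor-depth-net"}
POINTCLOUD_ALGOS = {"dense": "poisson-reconstruction", "sparse": "ball-pivoting"}

def select_pipeline(scene: str, density: str) -> tuple:
    """Choose a (depth model, point-cloud algorithm) pair for a 2D-to-3D request."""
    model = DEPTH_MODELS.get(scene, "outdoor-depth-net")    # fallback assumption
    algo = POINTCLOUD_ALGOS.get(density, "ball-pivoting")   # fallback assumption
    return model, algo

print(select_pipeline("indoor", "dense"))
```

In the actual framework the choice would be driven by the multi-modal input (text prompt plus image analysis) rather than two explicit string arguments; the sketch only conveys that different inputs map to different model/algorithm pairs.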
- Paper address: https://arxiv.org/pdf/2403.09530v1.pdf
- Paper title: VisionGPT-3D: A Generalized Multimodal Agent for Enhanced 3D Vision Understanding
For more information, please refer to the original paper.