Not long ago, OpenAI's Sora rose to prominence with its striking video generation results, standing out among text-to-video models and becoming the focus of global attention. Two weeks after releasing a Sora training and inference reproduction pipeline that cut costs by 46%, the Colossal-AI team has now fully open sourced "Open-Sora 1.0", which it describes as the world's first Sora-like video generation model, covering the entire training process, including data processing, all training details, and model weights, inviting AI enthusiasts worldwide to help usher in a new era of video creation.
For a sneak peek, let’s first watch a video of a bustling city generated by the “Open-Sora 1.0” model released by the Colossal-AI team.
A snapshot of the bustling city generated by Open-Sora 1.0
This is just the tip of the iceberg of the Sora reproduction work. The video model architecture, trained model weights, all training details needed for reproduction, the data preprocessing pipeline, demos, and a detailed getting-started tutorial have all been open sourced for free by the Colossal-AI team on GitHub. We reached out to the team as soon as possible and learned that they will continue to publish Open-Sora updates and related solutions; interested readers can keep following the Open-Sora open source community.
Open-Sora open source address: https://github.com/hpcaitech/Open-Sora
Next, we will walk through several key aspects of the Sora reproduction solution: model architecture design, training methodology, data preprocessing, generated results, and optimized training strategies.
The model adopts the currently popular Diffusion Transformer (DiT) [1] architecture. The team builds on PixArt-α [2], a high-quality open source text-to-image model that also uses the DiT architecture, adds a temporal attention layer on top of it, and extends it to video data. Specifically, the whole pipeline includes a pre-trained VAE, a text encoder, and an STDiT (Spatial Temporal Diffusion Transformer) model built on a spatial-temporal attention mechanism. The structure of each STDiT layer is shown in the figure below: a one-dimensional temporal attention module is stacked serially on top of a two-dimensional spatial attention module to model temporal relationships, and a cross-attention module after the temporal attention aligns the video tokens with the text semantics. Compared with a full attention mechanism, this structure greatly reduces training and inference overhead. And compared with the Latte [4] model, which also uses spatial-temporal attention, STDiT can make better use of pre-trained image DiT weights to continue training on video data.
STDiT structure diagram
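To make the serial design concrete, here is a minimal PyTorch sketch of one STDiT-style block. This is our own illustration, not the team's code (the official implementation is in the Open-Sora repository); the module names are hypothetical, and layer norms, timestep conditioning, and other details are omitted for brevity.

```python
import torch
import torch.nn as nn


class STDiTBlockSketch(nn.Module):
    """Serial spatial attention -> temporal attention -> text cross-attention."""

    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # x: (B, T, S, C) video latent tokens; text: (B, L, C) text embeddings.
        b, t, s, c = x.shape

        # 2D spatial attention: tokens within each frame attend to each other.
        xs = x.reshape(b * t, s, c)
        xs = xs + self.spatial_attn(xs, xs, xs)[0]
        x = xs.reshape(b, t, s, c)

        # 1D temporal attention: each spatial location attends across frames.
        xt = x.permute(0, 2, 1, 3).reshape(b * s, t, c)
        xt = xt + self.temporal_attn(xt, xt, xt)[0]
        x = xt.reshape(b, s, t, c).permute(0, 2, 1, 3)

        # Cross-attention aligns video tokens with the text prompt, then an MLP.
        xf = x.reshape(b, t * s, c)
        xf = xf + self.cross_attn(xf, text, text)[0]
        xf = xf + self.mlp(xf)
        return xf.reshape(b, t, s, c)
```

The key point of the serial layout is that attention is never computed over all T*S tokens at once, which is where the savings over full attention come from.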
The training and inference process of the entire model is as follows. In the training phase, the encoder of the pre-trained variational autoencoder (VAE) first compresses the video data, and the STDiT diffusion model is then trained together with the text embeddings in the compressed latent space. In the inference stage, Gaussian noise is randomly sampled in the VAE latent space and fed into STDiT together with the prompt embedding; after denoising, the resulting features are passed to the VAE decoder to obtain the video.
Training process of the model
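The flow described above can be summarized in the following simplified sketch. Here `vae`, `text_encoder`, and `stdit` stand in for the pre-trained VAE, the T5 text encoder, and the STDiT denoiser; the function names, the toy noise schedule, and the placeholder sampler update are all our assumptions, not the team's actual training code.

```python
import torch
import torch.nn.functional as F


def training_step(vae, text_encoder, stdit, video, prompt_ids, num_timesteps=1000):
    with torch.no_grad():
        z = vae.encode(video)            # compress video into the latent space
        text_emb = text_encoder(prompt_ids)
    t = torch.randint(0, num_timesteps, (z.shape[0],), device=z.device)
    noise = torch.randn_like(z)
    alpha = (1 - t.float() / num_timesteps).view(-1, 1, 1, 1, 1)  # toy schedule
    z_noisy = alpha.sqrt() * z + (1 - alpha).sqrt() * noise
    pred = stdit(z_noisy, t, text_emb)   # predict the added noise
    return F.mse_loss(pred, noise)


@torch.no_grad()
def sample(vae, text_encoder, stdit, prompt_ids, latent_shape, num_steps=50):
    z = torch.randn(latent_shape)        # Gaussian noise in the VAE latent space
    text_emb = text_encoder(prompt_ids)
    for t in reversed(range(num_steps)): # iterative denoising loop
        t_batch = torch.full((latent_shape[0],), t)
        z = stdit(z, t_batch, text_emb)  # placeholder for a real DDPM/DDIM update
    return vae.decode(z)                 # decode latents back to video frames
```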
The team told us that Open-Sora's reproduction plan draws on the Stable Video Diffusion (SVD) [3] work and consists of three stages: large-scale image pre-training, large-scale video pre-training, and fine-tuning on high-quality video data.
Each stage continues training from the weights of the previous stage. Compared with single-stage training from scratch, this multi-stage schedule reaches high-quality video generation more efficiently by gradually scaling up the data.
Three phases of training plan
The first stage uses large-scale image pre-training: a mature text-to-image model effectively reduces the cost of video pre-training.
The team explained to us that abundant large-scale image data on the Internet, combined with mature text-to-image techniques, makes it possible to train a high-quality text-to-image model, which then serves as the initialization weights for the video pre-training of the next stage. At the same time, since no high-quality spatiotemporal VAE is currently available, they used the image VAE pre-trained for the Stable Diffusion [5] model. This strategy not only guarantees a strong initial model but also significantly reduces the overall cost of video pre-training.
The second stage performs large-scale video pre-training to increase the model's generalization ability and effectively capture the temporal correlations in videos.
We understand that this stage uses a large amount of video data for training, covering diverse video topics to improve the model's generalization. The second-stage model adds a temporal attention module to the first-stage text-to-image model in order to learn temporal relationships in videos; the remaining modules stay identical to the first stage and load the first-stage weights as initialization. The output of the temporal attention module is initialized to zero, which yields more efficient and faster convergence. The Colossal-AI team used the open source PixArt-α [2] weights to initialize the second-stage STDiT model, and the T5 [6] model as the text encoder. They also pre-trained at a small resolution of 256x256, which further sped up convergence and reduced training costs.
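The zero-initialization trick is worth pausing on: because each temporal attention module sits in a residual branch, zeroing its output projection makes every new block an identity mapping at the start of stage two, so the model initially behaves exactly like the pre-trained image model. A minimal sketch, assuming the module is a standard `nn.MultiheadAttention` (the actual Open-Sora module may differ):

```python
import torch.nn as nn


def zero_init_temporal_out(temporal_attn: nn.MultiheadAttention) -> None:
    # Zero the output projection so the residual branch contributes nothing
    # at initialization and training starts from the image model's behavior.
    nn.init.zeros_(temporal_attn.out_proj.weight)
    nn.init.zeros_(temporal_attn.out_proj.bias)
```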
The third stage fine-tunes on high-quality video data to significantly improve the quality of video generation.
The team mentioned that the video data used in the third stage is one order of magnitude smaller than in the second stage, but the clips are longer, higher-resolution, and of higher quality. Fine-tuning in this way allowed them to scale video generation efficiently from short to long, from low to high resolution, and from low to high fidelity.
The team stated that the Open-Sora reproduction was trained on 64 H800 GPUs. The second stage took 2,808 GPU hours, roughly $7,000, and the third stage took 1,920 GPU hours, roughly $4,500. By their preliminary estimate, the whole schedule keeps the cost of the Open-Sora reproduction at around US$10,000.
To further lower the threshold and complexity of reproducing Sora, the Colossal-AI team also provides convenient video data preprocessing scripts for kicking off reproduction pre-training, covering downloading public video datasets, segmenting long videos into short clips at shot boundaries, and generating detailed prompts with the open source vision-language model LLaVA [7]. The team mentioned that their batch video captioning code can annotate a video in 3 seconds using two GPUs, with quality close to GPT-4V, and the resulting video/text pairs can be used directly for training. With the open source code they provide on GitHub, we can easily and quickly generate the video/text pairs required for training on our own datasets, significantly reducing the technical threshold and preparation needed to start a Sora reproduction project.
Video/text pair automatically generated based on data preprocessing script
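As a rough outline of such a pipeline, the sketch below splits a long video into clips at detected shot boundaries and then captions each clip. It uses PySceneDetect for shot detection, which is our assumption; the Open-Sora scripts may use different tooling, and the LLaVA captioning call is left as a placeholder stub rather than a real API.

```python
from scenedetect import ContentDetector, detect, split_video_ffmpeg


def run_llava_captioner(clip_path: str) -> str:
    """Placeholder for the LLaVA-based batch captioner from the Open-Sora scripts."""
    raise NotImplementedError("plug in your LLaVA captioning call here")


def preprocess(video_path: str) -> list[tuple[str, str]]:
    # Detect shot boundaries by content change, then cut the video there.
    scenes = detect(video_path, ContentDetector())
    split_video_ffmpeg(video_path, scenes,
                       output_file_template="clip-$SCENE_NUMBER.mp4")

    # Caption each clip to build video/text pairs ready for training.
    pairs = []
    for i in range(1, len(scenes) + 1):
        clip = f"clip-{i:03d}.mp4"
        pairs.append((clip, run_llava_captioner(clip)))
    return pairs
```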
Let's take a look at Open-Sora's actual video generation results. For example, we ask Open-Sora to generate an aerial shot of sea water lapping against the rocks of a cliff-lined coast.
Or let Open-Sora capture a majestic aerial view of mountains, with a waterfall surging down from the cliffs and finally flowing into a lake.
In addition to taking to the sky, Open-Sora can also go under the sea. Simply enter a prompt and let Open-Sora generate a shot of the underwater world, where a turtle cruises leisurely over a coral reef.
Open-Sora can also show us the Milky Way with twinkling stars through time-lapse photography.
If you have more interesting ideas for video generation, you can visit the Open-Sora open source community to get the model weights and try them for free. Link: https://github.com/hpcaitech/Open-Sora
It is worth noting that the team mentions on GitHub that the current version uses only 400K training clips, so both the generation quality and the ability to follow text still need improvement. For example, in the turtle video above, the generated turtle has an extra leg, and Open-Sora 1.0 is also not good at generating portraits or complex scenes. The team lists a roadmap on GitHub aimed at continuously fixing these defects and improving generation quality.
Besides significantly lowering the technical threshold for Sora reproduction and improving video generation quality along dimensions such as duration, resolution, and content, the team also provides efficient training support through the Colossal-AI acceleration system. With training strategies such as operator optimization and hybrid parallelism, they achieved a 1.55x speedup when training on 64-frame, 512x512 videos. Meanwhile, thanks to Colossal-AI's heterogeneous memory management system, a 1-minute 1080p video training task can run without issue on a single server (8x H800).
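For readers curious what plugging a model into this stack looks like, here is a minimal, assumption-laden outline of wrapping a model with Colossal-AI's booster and its Gemini heterogeneous-memory plugin; exact API names vary across Colossal-AI versions, and this is not the team's actual training script.

```python
import colossalai
import torch
from colossalai.booster import Booster
from colossalai.booster.plugin import GeminiPlugin

colossalai.launch_from_torch(config={})        # expects a torchrun-style environment

model = torch.nn.Linear(1024, 1024)            # stand-in for the STDiT model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Gemini manages parameters/optimizer states across GPU and CPU memory,
# which is what makes very long, high-resolution clips fit on one server.
booster = Booster(plugin=GeminiPlugin())
model, optimizer, *_ = booster.boost(model, optimizer)
```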
In addition, the team's report shows that the STDiT architecture itself is highly efficient during training: compared with a DiT using full attention, STDiT achieves up to a 5x speedup as the number of frames grows, which is particularly critical for real-world tasks such as processing long video sequences.
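The scaling intuition is easy to check with a back-of-the-envelope count. Full attention over T frames of S tokens each is quadratic in T*S, while the serial design pays T spatial passes quadratic in S plus S temporal passes quadratic in T. The snippet below counts attended token pairs under these assumptions; the theoretical ratio grows with frame count, though measured end-to-end speedups (like the reported 5x) are smaller because other costs dominate.

```python
def attention_pairs(T: int, S: int) -> tuple[int, int]:
    full = (T * S) ** 2              # full spatio-temporal attention
    factored = T * S**2 + S * T**2   # serial spatial pass + temporal pass
    return full, factored


for T in (16, 64):
    S = 32 * 32  # e.g., 512x512 latents patchified into 32x32 tokens per frame
    full, factored = attention_pairs(T, S)
    print(f"T={T}: full/factored ratio = {full / factored:.1f}x")
    # T=16 -> ~15.8x, T=64 -> ~60.2x: the gap widens as frames increase.
```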
Welcome to continue to pay attention to the Open-Sora open source project: https://github.com/hpcaitech/Open-Sora
The team stated that they will continue to maintain and optimize the Open-Sora project, and they expect to use more video training data to generate longer, higher-quality video content and to support multi-resolution features, effectively advancing the adoption of AI technology in film, gaming, advertising, and other fields.
Reference link:
[1] https://arxiv.org/abs/2212.09748 Scalable Diffusion Models with Transformers.
[2] https://arxiv.org/abs/2310.00426 PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis.
[3] https://arxiv.org/abs/2311.15127 Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets.
[4] https://arxiv.org/abs/2401.03048 Latte: Latent Diffusion Transformer for Video Generation.
[5] https://huggingface.co/stabilityai/sd-vae-ft-mse-original.
[6] https://github.com/google-research/text-to-text-transfer-transformer.
[7] https://github.com/haotian-liu/LLaVA.
[8] https://hpc-ai.com/blog/open-sora-v1.0.