With the successful launch of Sora, video DiT models have attracted widespread attention and discussion. Designing stable, very large-scale neural networks has long been a research focus in visual generation, and the success of DiT opens new possibilities for scaling up image generation.
However, because video data is highly structured and complex, extending DiT to video generation is a challenging task. A team of researchers from the Shanghai Artificial Intelligence Laboratory and other institutions has answered this question through large-scale experiments.
In November last year, the team released a self-developed model called Latte, whose technology is similar to Sora's. Latte is the world's first open-source text-to-video DiT and has received widespread attention; many open-source frameworks such as Open-Sora Plan (PKU) and Open-Sora (ColossalAI) use or refer to Latte's model design.
Let's first look at Latte's video generation results.
Overall, Latte contains two key modules: a pre-trained VAE and a video DiT. The VAE encoder compresses the video from pixel space to latent space frame by frame; the video DiT extracts tokens from the latent representation and performs spatiotemporal modeling; finally, the VAE decoder maps the features back to pixel space to generate the video. To obtain the best video quality, the researchers focused on two important aspects of Latte's design: the overall structure of the video DiT model and the best-practice details of model training.
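The pipeline described above can be sketched roughly as follows. This is a minimal stand-in rather than the actual Latte implementation: the module names are illustrative, and an 8x spatial downsampling factor for the VAE is an assumption.

```python
# Minimal sketch of the Latte-style pipeline: a frame-wise VAE encoder, a latent-space
# video DiT (not shown), and a frame-wise VAE decoder. Names and sizes are illustrative.
import torch
import torch.nn as nn

class FramewiseVAE(nn.Module):
    """Stand-in VAE that compresses each frame from pixel to latent space independently."""
    def __init__(self, in_ch=3, latent_ch=4):
        super().__init__()
        self.encoder = nn.Conv2d(in_ch, latent_ch, kernel_size=8, stride=8)     # 8x spatial downsample
        self.decoder = nn.ConvTranspose2d(latent_ch, in_ch, kernel_size=8, stride=8)

    def encode(self, video):                      # video: (B, T, C, H, W)
        b, t, c, h, w = video.shape
        z = self.encoder(video.reshape(b * t, c, h, w))
        return z.reshape(b, t, *z.shape[1:])      # (B, T, C', H/8, W/8)

    def decode(self, latents):                    # latents: (B, T, C', h, w)
        b, t, c, h, w = latents.shape
        x = self.decoder(latents.reshape(b * t, c, h, w))
        return x.reshape(b, t, *x.shape[1:])

video = torch.randn(1, 16, 3, 256, 256)           # 16 frames of 256x256 RGB
vae = FramewiseVAE()
latents = vae.encode(video)                       # the video DiT would denoise these latents
recon = vae.decode(latents)                       # map latents back to pixel space
print(latents.shape, recon.shape)
```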
(1) Study of Latte's overall model structure design
Figure 1. Latte model structure and its variants
The authors propose four Latte variants (Figure 1), built from two types of Transformer blocks designed around the spatiotemporal attention mechanism, with two variants studied for each type:
1. Single-attention blocks: each block contains only temporal or only spatial attention.
2. Multi-attention blocks: each block contains both temporal and spatial attention mechanisms (the variants referenced by Open-Sora). A minimal sketch of both block families follows this list.
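The sketch below is a hedged illustration built on standard PyTorch attention, not the official Latte code; the token layout (B, T, S, D) and block details are assumptions.

```python
# Illustrative versions of the two block families: a block that attends over one axis only,
# and a block that applies spatial attention followed by temporal attention.
import torch
import torch.nn as nn

class SingleAttnBlock(nn.Module):
    """Single-attention block: attends over ONE axis (spatial or temporal) only."""
    def __init__(self, dim, heads, axis):
        super().__init__()
        self.axis = axis                     # "spatial" or "temporal"
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                    # x: (B, T, S, D) tokens
        b, t, s, d = x.shape
        if self.axis == "spatial":           # attend across the S spatial tokens of each frame
            seq = x.reshape(b * t, s, d)
        else:                                # attend across the T frames at each spatial location
            seq = x.permute(0, 2, 1, 3).reshape(b * s, t, d)
        h = self.norm(seq)
        seq = seq + self.attn(h, h, h, need_weights=False)[0]
        return seq.reshape(b, t, s, d) if self.axis == "spatial" \
               else seq.reshape(b, s, t, d).permute(0, 2, 1, 3)

class MultiAttnBlock(nn.Module):
    """Multi-attention block: spatial attention followed by temporal attention inside one block."""
    def __init__(self, dim, heads):
        super().__init__()
        self.spatial = SingleAttnBlock(dim, heads, "spatial")
        self.temporal = SingleAttnBlock(dim, heads, "temporal")

    def forward(self, x):
        return self.temporal(self.spatial(x))

x = torch.randn(2, 16, 256, 384)             # (batch, frames, spatial tokens, hidden dim)
print(MultiAttnBlock(384, 6)(x).shape)        # torch.Size([2, 16, 256, 384])
```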
Experiments show (Figure 2) that, with the same parameter count for all four variants, Variant 4 differs noticeably from the other three in FLOPs, and its FVD is correspondingly the highest. The other three variants perform similarly overall, with Variant 1 achieving the best performance. The authors plan a more detailed study on large-scale data in the future.
Figure 2. Model structure FVD
(2) Exploring the optimal design of the Latte model and training details (best practices)
In addition to the overall model structure, the authors also explored other model and training factors that affect generation quality.
1. Token extraction: two methods were explored, single-frame tokens (a) and spatio-temporal tokens (b). The former compresses tokens only at the spatial level, while the latter compresses spatial and temporal information simultaneously. Experiments show that single-frame tokens outperform spatio-temporal tokens (Figure 4). Comparing with Sora, the authors speculate that Sora's spacetime patches are extracted from latents that a video VAE has already compressed along the time dimension, so that in latent space, similar to Latte's design, only single-frame token processing is performed. A sketch of the two token-extraction schemes is shown after Figure 4.
Figure 3. Token extraction methods: (a) single-frame tokens and (b) spatio-temporal tokens
Figure 4. Token extraction FVD
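One way to picture the two token-extraction schemes is with 3D convolutional patch embeddings, as sketched below; the patch sizes, channel counts, and latent shape are illustrative assumptions, not Latte's exact settings.

```python
# (a) single-frame tokens: patchify each frame spatially only;
# (b) spatio-temporal tokens: also compress along the time axis with a 3D patch.
import torch
import torch.nn as nn

latent = torch.randn(1, 4, 16, 32, 32)        # (B, C, T, H, W) latent video

# (a) 2x2 spatial patches, one set of tokens per frame
single_frame = nn.Conv3d(4, 384, kernel_size=(1, 2, 2), stride=(1, 2, 2))
tokens_a = single_frame(latent).flatten(2).transpose(1, 2)      # (B, T*H/2*W/2, D)

# (b) 2x2x2 patches, compressing time and space together
spatio_temporal = nn.Conv3d(4, 384, kernel_size=(2, 2, 2), stride=(2, 2, 2))
tokens_b = spatio_temporal(latent).flatten(2).transpose(1, 2)   # (B, T/2*H/2*W/2, D)

print(tokens_a.shape)   # torch.Size([1, 4096, 384])
print(tokens_b.shape)   # torch.Size([1, 2048, 384])
```

Note that the spatio-temporal scheme halves the token count in this sketch, which reduces compute, but per Figure 4 it yields worse FVD in Latte's experiments.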
2. Conditional injection: two approaches were explored, (a) S-AdaLN and (b) all tokens (Figure 5). S-AdaLN converts the condition information into normalization parameters via an MLP and injects it into each block; the all-tokens form converts all conditions into unified tokens fed as model input. Experiments show that S-AdaLN produces higher-quality results than all tokens (Figure 6). The reason is that S-AdaLN injects the condition directly into every block, whereas all tokens must pass the conditional information layer by layer from the input to the final layer, losing some information along the way. A minimal sketch of S-AdaLN-style modulation is shown after Figure 6.
Figure 5. (a) S-AdaLN and (b) all tokens
Figure 6. Conditional injection method FVD
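The sketch below shows S-AdaLN-style modulation on a simplified block with only a feed-forward branch; the actual Latte module also modulates the attention path and may differ in details.

```python
# Minimal adaLN-style conditioning: an MLP maps the condition embedding to per-block
# shift/scale/gate parameters that modulate the normalized hidden states.
import torch
import torch.nn as nn

class SAdaLNBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(nn.SiLU(), nn.Linear(dim, 3 * dim))   # condition -> (shift, scale, gate)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x, cond):               # x: (B, N, D), cond: (B, D), e.g. a timestep embedding
        shift, scale, gate = self.mlp(cond).chunk(3, dim=-1)
        h = self.norm(x) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        return x + gate.unsqueeze(1) * self.ff(h)   # condition injected directly in this block

x, cond = torch.randn(2, 1024, 384), torch.randn(2, 384)
print(SAdaLNBlock(384)(x, cond).shape)        # torch.Size([2, 1024, 384])
```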
3. Spatio-temporal position encoding: absolute and relative position encodings were explored. Different position encodings have little impact on the final video quality (Figure 7). Because the generated clips are short, the differences between position encodings are not enough to affect video quality; for long video generation this factor needs to be reconsidered. A small sketch of absolute spatio-temporal position encoding follows Figure 7.
Figure 7. Position encoding method FVD
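As a rough illustration of the absolute option, separate temporal and spatial embeddings can be added to every token, as below; the use of learned embeddings here is an assumption, and the paper's actual encoding (e.g., sinusoidal) may differ.

```python
# Absolute spatio-temporal position encoding sketch: one embedding per frame index and
# one per spatial position, broadcast-added to every token.
import torch
import torch.nn as nn

T, S, D = 16, 256, 384                                   # frames, spatial tokens per frame, hidden dim
temporal_pos = nn.Parameter(torch.zeros(1, T, 1, D))     # embedding per frame index
spatial_pos  = nn.Parameter(torch.zeros(1, 1, S, D))     # embedding per spatial position

tokens = torch.randn(2, T, S, D)
tokens = tokens + temporal_pos + spatial_pos             # broadcast over the other axis
print(tokens.shape)                                      # torch.Size([2, 16, 256, 384])
```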
4. Model initialization: the impact of initializing with ImageNet pre-trained parameters was explored. Experiments show that the ImageNet-initialized model converges faster; however, as training proceeds, the randomly initialized model achieves better results (Figure 8). A possible reason is the relatively large distribution gap between ImageNet and the training set FaceForensics, so pre-training failed to improve the final results. For the text-to-video task this conclusion needs to be reconsidered: on general datasets the spatial content distribution of images and videos is similar, and using a pre-trained T2I model can greatly benefit T2V. A hedged sketch of this initialization scheme follows Figure 8.
Figure 8. Initialization parameter FVD
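The initialization scheme can be sketched with toy stand-in modules: parameters whose names and shapes match the ImageNet-pretrained image DiT are copied over, while temporal layers keep their random initialization. The module names below are hypothetical placeholders.

```python
# Copy matching pretrained (spatial) weights into the video model; temporal layers stay random.
import torch
import torch.nn as nn

# toy stand-ins: an "image DiT" with only spatial layers, a "video DiT" that adds temporal ones
image_dit = nn.ModuleDict({"spatial_attn": nn.Linear(384, 384)})
video_dit = nn.ModuleDict({"spatial_attn": nn.Linear(384, 384),
                           "temporal_attn": nn.Linear(384, 384)})

pretrained = image_dit.state_dict()
own = video_dit.state_dict()
# keep only parameters whose name and shape match the image checkpoint
own.update({k: v for k, v in pretrained.items() if k in own and v.shape == own[k].shape})
video_dit.load_state_dict(own)
```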
5. Image and video joint training: videos and images are compressed into unified tokens for joint training; video tokens optimize all parameters, while image tokens optimize only the spatial parameters. Joint training significantly improves the final results (Table 2 and Table 3): both image FID and video FVD decrease. This is consistent with UNet-based frameworks [2][3]. A simplified sketch of the joint-training setup is shown below.
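The sketch below is a heavy simplification, under the assumption that image samples are treated as single-frame clips that bypass the temporal layers; the actual Latte recipe differs in detail.

```python
# Joint image-video training sketch: video samples update both spatial and temporal layers,
# while image samples only exercise the spatial layers.
import torch
import torch.nn as nn

spatial = nn.Linear(384, 384)      # stand-in for all spatial (per-frame) layers
temporal = nn.Linear(384, 384)     # stand-in for all temporal layers

def forward(tokens, is_image):     # tokens: (B, T, S, D)
    h = spatial(tokens)
    if not is_image:               # image tokens skip temporal modeling entirely
        h = temporal(h)
    return h

video = torch.randn(2, 16, 256, 384)
image = torch.randn(2, 1, 256, 384)    # an image is a one-frame "video"
loss = forward(video, False).pow(2).mean() + forward(image, True).pow(2).mean()
loss.backward()                        # temporal layers only receive gradients from the video term
```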
6. Model sizes: four model sizes, S, B, L and XL, were explored (Table 1). Scaling up the video DiT significantly improves the quality of generated samples (Figure 9). This conclusion also supports using the Transformer structure in video diffusion models for subsequent scaling up. A rough sketch of the four size configurations follows Table 1.
Table 1. Latte model configurations at different sizes
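Assuming Latte follows the standard DiT size configurations (the exact values should be checked against Table 1), the four sizes and a rough parameter estimate can be sketched as:

```python
# Assumed S/B/L/XL configurations in DiT style: depth, hidden dim, attention heads.
latte_configs = {
    "S":  dict(depth=12, hidden_dim=384,  num_heads=6),
    "B":  dict(depth=12, hidden_dim=768,  num_heads=12),
    "L":  dict(depth=24, hidden_dim=1024, num_heads=16),
    "XL": dict(depth=28, hidden_dim=1152, num_heads=16),
}

def count_params_rough(cfg):
    # very rough Transformer parameter estimate: ~12 * depth * hidden_dim^2
    return 12 * cfg["depth"] * cfg["hidden_dim"] ** 2

for name, cfg in latte_configs.items():
    print(name, f"~{count_params_rough(cfg) / 1e6:.0f}M params")
```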
Figure 9. Model size FVD

Qualitative and quantitative analysis

The authors trained Latte on four academic datasets (FaceForensics, TaichiHD, SkyTimelapse, and UCF101). The qualitative and quantitative results (Table 2 and Table 3) show that Latte achieves the best performance, demonstrating that the overall model design is sound.
Table 2. UCF101 image quality evaluation
Table 3. Latte and SoTA video quality evaluation

Text-to-video extension, discussion, and summary
As the world's first open-source text-to-video DiT, Latte has achieved promising results, but due to the huge difference in computational resources, there is still a large gap with Sora in terms of generation clarity, fluency, and duration. The team welcomes and is actively seeking all kinds of cooperation, hoping to use the power of open source to build a self-developed, high-performance, large-scale general video generation model.