Open-Sora has quietly been updated in the open source community. It now supports video generation up to 16 seconds at resolutions up to 720p, and it can handle text-to-image, text-to-video, image-to-video, and video-to-video generation at any aspect ratio, as well as infinite-length video generation. Let's try it out.
Generate a landscape Christmas snow scene and post it to Bilibili
Generate a portrait-mode video and post it to Douyin
It can also generate a full 16-second video; now everyone can try their hand at screenwriting
How to get started
GitHub: https://github.com/hpcaitech/Open-Sora
What's even cooler is that Open-Sora remains fully open source, including the latest model architecture, the latest model weights, the multi-duration/resolution/aspect-ratio/frame-rate training procedure, the complete data collection and preprocessing pipeline, all training details, demo examples, and a detailed getting-started tutorial.
Overview of the latest features
The author team officially released the Open-Sora technical report [1] on GitHub. According to the author’s understanding, this update mainly includes the following key features:
Space-time diffusion model ST-DiT-2
The author team stated that they made key improvements to the STDiT architecture from Open-Sora 1.0, aimed at improving the model's training stability and overall performance. For the sequence prediction task at hand, the team adopted best practices from large language models (LLMs) and replaced the sinusoidal positional encoding in temporal attention with the more effective rotary positional encoding (RoPE). In addition, to improve training stability, they drew on the SD3 model architecture and introduced QK normalization to stabilize half-precision training. To support training at multiple resolutions, aspect ratios, and frame rates, the proposed ST-DiT-2 architecture can automatically scale positional encodings and handle inputs of different sizes.
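To make those two changes concrete, here is a minimal PyTorch sketch of a temporal attention layer with RoPE and QK normalization. The shapes, the RMSNorm placement, and all hyperparameters are illustrative assumptions; this is not the actual ST-DiT-2 code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rope(x: torch.Tensor) -> torch.Tensor:
    """Rotary position embedding over the time axis.
    x: (batch, heads, time, head_dim) with an even head_dim."""
    b, h, t, d = x.shape
    freqs = 1.0 / (10000 ** (torch.arange(0, d, 2, device=x.device, dtype=x.dtype) / d))
    angles = torch.arange(t, device=x.device, dtype=x.dtype)[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()          # each (time, head_dim/2)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    # Rotate each (x1, x2) channel pair by its position-dependent angle.
    return torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1).flatten(-2)

class TemporalAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # QK normalization (as in SD3): normalize queries and keys before the
        # dot product to keep attention logits bounded in half precision.
        self.q_norm = nn.RMSNorm(dim // num_heads)
        self.k_norm = nn.RMSNorm(dim // num_heads)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim); attention runs along the temporal axis.
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        split = lambda z: z.view(b, t, self.num_heads, -1).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)
        q, k = self.q_norm(q), self.k_norm(k)      # QK normalization
        q, k = rope(q), rope(k)                    # RoPE replaces sinusoidal PE
        out = F.scaled_dot_product_attention(q, k, v)
        return self.proj(out.transpose(1, 2).reshape(b, t, d))

x = torch.randn(2, 16, 256)                        # 2 clips, 16 frames, 256 channels
print(TemporalAttention(256)(x).shape)             # torch.Size([2, 16, 256])
```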
Multi-stage training
According to the Open-Sora technical report, Open-Sora adopts a multi-stage training approach in which each stage continues training from the weights of the previous stage. Compared with single-stage training, this multi-stage approach reaches high-quality video generation more efficiently by introducing data step by step.
In the initial stage, most videos are at 144p resolution, mixed with images and 240p/480p videos; training lasts about one week, for a total of 81k steps. In the second stage, most video data is raised to 240p and 480p; training takes one day and reaches 22k steps. The third stage is raised further to 480p and 720p; training lasts one day and completes 4k steps. The entire multi-stage process finishes in about nine days. Compared with Open-Sora 1.0, video generation quality improves along multiple dimensions.
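For reference, here is that schedule condensed into an illustrative Python config. The numbers come straight from the report summary above; the dictionary layout itself is an assumption, not Open-Sora's actual configuration format.

```python
# Three-stage schedule as described in the report; each stage resumes from the
# previous stage's checkpoint.
training_stages = [
    {   # Stage 1: mostly 144p video, mixed with images and 240p/480p clips
        "main_resolutions": ["144p"],
        "mixed_in": ["images", "240p", "480p"],
        "duration": "~1 week",
        "steps": 81_000,
    },
    {   # Stage 2: most video data raised to 240p/480p
        "main_resolutions": ["240p", "480p"],
        "duration": "1 day",
        "steps": 22_000,
    },
    {   # Stage 3: raised further to 480p/720p
        "main_resolutions": ["480p", "720p"],
        "duration": "1 day",
        "steps": 4_000,
    },
]
```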
Unified image-to-video / video-to-video framework
The author team stated that, thanks to the nature of the Transformer, the DiT architecture can easily be extended to support image-to-image and video-to-video tasks. They proposed a masking strategy to support conditioning on images and videos. By setting different masks, a variety of generation tasks can be supported, including image-to-video, looping video, video extension, autoregressive video generation, video connection, video editing, frame interpolation, and more.
Masking strategy supporting image and video conditioning
The author team stated that, inspired by the UL2 [2] approach, they introduced a random masking strategy at the training stage. Specifically, frames are selected and unmasked at random during training, including (but not limited to) unmasking the first frame, the first k frames, the last k frames, or any k frames. The authors also revealed that, based on experiments with Open-Sora 1.0, applying the masking strategy with 50% probability lets the model learn to handle image conditioning in only a small number of steps. In the latest version of Open-Sora, they pre-train from scratch with the masking strategy.
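Here is a minimal sketch of that random unmasking scheme, assuming a 1-for-conditioning mask convention and these four pattern names; both are illustrative choices, not the authors' exact implementation.

```python
import random
import torch

def sample_frame_mask(num_frames: int, p_mask: float = 0.5) -> torch.Tensor:
    """Return a (num_frames,) mask: 1 = conditioning frame (unmasked), 0 = frame to generate."""
    mask = torch.zeros(num_frames)
    if random.random() >= p_mask:
        return mask                        # no conditioning: plain text-to-video step
    k = random.randint(1, num_frames - 1)  # how many frames to reveal
    pattern = random.choice(["first_frame", "first_k", "last_k", "any_k"])
    if pattern == "first_frame":
        mask[0] = 1                        # image-to-video style conditioning
    elif pattern == "first_k":
        mask[:k] = 1                       # video extension / continuation
    elif pattern == "last_k":
        mask[-k:] = 1                      # generate what led up to a clip
    else:
        idx = torch.randperm(num_frames)[:k]
        mask[idx] = 1                      # arbitrary frames, e.g. for interpolation
    return mask

print(sample_frame_mask(16))
```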
In addition, the author team thoughtfully provides a detailed guide to masking-strategy configuration for the inference stage: a tuple of five numbers offers great flexibility and control when defining the masking strategy.
Masking strategy configuration instructions
Support for training with multiple durations, resolutions, aspect ratios, and frame rates
OpenAI's Sora technical report [3] points out that training on videos at their native resolution, aspect ratio, and length improves sampling flexibility and leads to better framing and composition. To this end, the author team proposed a bucketing strategy.
How is it implemented? From a close reading of the technical report, we learn that a bucket is a triplet of (resolution, number of frames, aspect ratio). The team predefined a range of aspect ratios for videos at different resolutions to cover the most common video aspect ratios. Before each training epoch, they reshuffle the dataset and assign samples to buckets according to their characteristics. Specifically, each sample is placed into a bucket whose resolution and frame count are less than or equal to those of the video itself.
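As an illustration, bucket assignment under that rule might look like the sketch below. The concrete bucket definitions and the closest-aspect-ratio snap are assumptions for illustration, not the report's actual values.

```python
# Predefined buckets: pixel area per resolution tier, frame counts, aspect ratios.
RESOLUTIONS = {"144p": 144 * 256, "240p": 240 * 426, "480p": 480 * 854}
FRAME_COUNTS = [16, 32, 64]
ASPECT_RATIOS = [9 / 16, 3 / 4, 1.0, 4 / 3, 16 / 9]

def assign_bucket(width: int, height: int, num_frames: int):
    """Place a video into the largest (resolution, frames, aspect ratio) bucket
    whose resolution and frame count do not exceed the video's own."""
    area = width * height
    res = max((r for r, a in RESOLUTIONS.items() if a <= area),
              key=lambda r: RESOLUTIONS[r], default=None)
    frames = max((f for f in FRAME_COUNTS if f <= num_frames), default=None)
    if res is None or frames is None:
        return None  # too small or too short for any bucket
    # Snap to the closest predefined aspect ratio.
    ar = min(ASPECT_RATIOS, key=lambda a: abs(a - width / height))
    return (res, frames, ar)

print(assign_bucket(1280, 720, 100))  # e.g. ('480p', 64, 1.77...) landscape bucket
```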
Open-Sora bucketing strategy
The author team further revealed that, to reduce compute requirements, they introduce two attributes for each (resolution, number of frames) pair, keep_prob and batch_size, which lower computational cost and enable multi-stage training. This way they can control the number of samples in different buckets and balance the GPU load by searching for a good batch size for each bucket. The team elaborates on this in the technical report; interested readers can find more details on GitHub: https://github.com/hpcaitech/Open-Sora
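A hedged sketch of how those two attributes could be applied follows; every number below is invented for illustration.

```python
import random

BUCKET_CFG = {
    # (resolution, num_frames): (keep_prob, batch_size)
    ("240p", 16): (1.0, 64),   # cheap bucket: keep every sample, large batches
    ("480p", 32): (0.5, 16),   # mid-cost bucket: keep half the candidates
    ("720p", 64): (0.2, 2),    # expensive bucket: keep few samples, tiny batches
}

def maybe_keep(bucket) -> bool:
    """Decide whether a sample assigned to `bucket` enters this epoch's batches."""
    keep_prob, _ = BUCKET_CFG[bucket]
    return random.random() < keep_prob

def batch_size_for(bucket) -> int:
    """Per-bucket batch size, tuned so each bucket loads the GPU similarly."""
    _, bs = BUCKET_CFG[bucket]
    return bs

kept = sum(maybe_keep(("720p", 64)) for _ in range(1000))
print(f"~{kept / 10:.0f}% of 720p/64-frame samples kept this epoch")
```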
Data collection and preprocessing pipeline
The author team even provides detailed guidance on data collection and processing. According to the technical report, during the development of Open-Sora 1.0 they realized that the quantity and quality of data are critical to training a high-performance model, so they devoted themselves to expanding and optimizing the dataset. They built an automated data-processing pipeline that follows Stable Video Diffusion (SVD) data-curation practice and covers scene segmentation, captioning, diversity scoring and filtering, plus a management system and specification for the dataset. They have likewise shared the data-processing scripts with the open source community, so interested developers can use these resources, together with the technical report and code, to efficiently process and optimize their own datasets.
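As one example of what the scene-segmentation step in such a pipeline can look like, here is a standalone sketch using the PySceneDetect library. Open-Sora ships its own processing scripts in the repo, so treat this only as an illustration of the idea: cut raw footage into single-scene clips before captioning and scoring.

```python
from scenedetect import detect, ContentDetector
from scenedetect.video_splitter import split_video_ffmpeg

video_path = "raw_footage.mp4"  # placeholder input file

# Detect cuts where frame-to-frame content changes sharply.
scene_list = detect(video_path, ContentDetector(threshold=27.0))
for i, (start, end) in enumerate(scene_list):
    print(f"scene {i}: {start.get_timecode()} -> {end.get_timecode()}")

# Write each detected scene out as its own clip (requires ffmpeg on PATH).
split_video_ffmpeg(video_path, scene_list)
```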
Open-Sora data processing process
Video generation showcase
Open-Sora's most eye-catching highlight is that it can capture the scenes in your mind and turn text descriptions into moving video. The images and flights of imagination that flash through your head can now be recorded for good and shared with others. Here, the author tried several different prompts as a starting point.
For example, the author tried generating a video of a stroll through a winter forest. Snow had just fallen; the pines were capped in white, dark needles and white snowflakes layered in crisp contrast.
Or, on a quiet night, you stand in a dark forest like the ones in countless fairy tales, a deep lake glittering under a sky full of bright stars.
The aerial night view of a bustling island is even more beautiful: the warm yellow lights and ribbon-like blue water pull you straight into the leisurely mood of a vacation.
The city bustles with traffic, and the high-rises and street shops with their lights still on late at night have a flavor all their own.
Beyond scenery, Open-Sora can also render all kinds of natural creatures. Whether it's a bright red flower,
or a chameleon slowly turning its head, Open-Sora can generate remarkably realistic video.
The author also tried a variety of prompts and provides many generated videos for reference, covering different content, resolutions, aspect ratios, and durations.
The author also found that Open-Sora can generate video clips at multiple resolutions with a single simple command, removing a real constraint on creative work.
Resolution: 16×240p
Resolution: 32×240p
Resolution: 64×360p
Resolution: 480×854p
We can also feed Open-Sora a static image to generate a short video.
Open-Sora can also cleverly bridge two static images. The video below carries you from afternoon into dusk, light and shadow shifting, every frame a poem of time.
For another example, to edit an existing video, a single simple command brings heavy snow to a forest that was bright just moments before.
We can also use Open-Sora to generate high-definition images.
It is worth noting that Open-Sora's model weights are completely free and publicly available to the open source community, so you might as well download them and give them a try. Since video splicing is also supported, you have the chance to create a short film with a storyline for free and bring your creativity to life.
Weight download address: https://github.com/hpcaitech/Open-Sora
Although good progress has been made in reproducing a Sora-like video model, the author team humbly points out that the generated videos still need improvement in many respects, including noise introduced during generation, a lack of temporal consistency, poor quality when generating people, and low aesthetic scores. The team says these challenges will be tackled first in the next version, in pursuit of a higher standard of video generation. Interested readers may wish to keep following along; we look forward to the next surprise the Open-Sora community brings us.
Open source address: https://github.com/hpcaitech/Open-Sora